Source: frameworks/kit-constraint-priority-matrix/04-constraint-priority-matrix-quality.md

Constraint Priority Matrix — Quality Checklist

Last updated: February 18, 2026

100-point scale. 90+ = pass. Below 90 = revise before delivering. This checklist is called by Process 6 of the workflow. Every output gets scored before delivery.


Section 1: Process 1 — Client Context (10 points)

| # | Check | Points | Pass/Fail |
| --- | --- | --- | --- |
| 1.1 | GPS has all three dimensions: Position, Direction, Speed | 3 | |
| 1.2 | GPS Direction anchored to the client's stated goal or vision in their own words. If the client has done vision work or stated an annual/engagement goal, that language is the primary Direction anchor. Advisor synthesis of unstated goals is secondary context (labeled as "horizon this unlocks" or equivalent), not the Direction itself. If no stated goal exists, Direction is flagged as advisor-inferred. | 4 | |
| 1.3 | Operating mode correctly identified (Mode 1 = first run; Mode 2 = returning, no project plan; Mode 3 = returning with active project plan) | 2 | |
| 1.4 | All data sources listed with availability status, including group session data and stated goal/vision artifacts if they exist | 1 | |

Section 1 Total: ___ / 10


Section 2: Process 2 — Categorize & Tag (25 points)

| # | Check | Points | Pass/Fail |
| --- | --- | --- | --- |
| 2.1 | Every constraint has all required fields: Category, Type (+ Modifier if applicable), Blocks, Speed, Opportunity Cost, Status, Recurrence, Pattern Flag, Client's Words | 5 | |
| 2.2 | Every constraint typed as Upstream or Downstream — no untyped constraints | 3 | |
| 2.3 | Upstream constraints explain what they're upstream OF (which constraints they cause) | 3 | |
| 2.4 | Downstream constraints reference their upstream cause | 3 | |
| 2.5 | Category assigned from the locked list only (Advisory Website, Operational Systems, Visibility Engine, Pipeline Infrastructure, Service Line Launch, Product Suite) | 2 | |
| 2.6 | "Blocks" field references a specific GPS Direction goal — traced to the client's stated goal language, not a vague or inferred statement | 3 | |
| 2.7 | Opportunity Cost is specific and concrete — not generic ("revenue loss" without context) | 3 | |
| 2.8 | Client's own words included for each constraint (direct quote or paraphrase attributed to client) | 3 | |

Additional checks (apply when relevant):

| # | Check | Points | Pass/Fail |
| --- | --- | --- | --- |
| 2.9 | Behavioral constraints (🧠 modifier) identified where the gap is a pattern of human behavior, not a missing system. Deployment approach addresses the behavioral mechanism — not just system design. | 0 (required, not scored separately — failure to identify a behavioral constraint when evidence supports it = revise) | |
| 2.10 | Every constraint's status is supported by evidence, not inferred from absence of evidence. If the most recent data is stale and no other source provides a current signal, status is Unknown — not Open, Stalled, or any other assumed status. | 0 (required, not scored separately — failure = revise) | |

Mode 3 additional checks (apply only when project plan is active):

| # | Check | Points | Pass/Fail |
| --- | --- | --- | --- |
| 2.11 | Initiative Status section present — every build in the project plan reported with current deployment step | 0 (required, not scored separately) | |
| 2.12 | Constraints in the active build sequence are NOT re-categorized or re-tagged — status update only | 0 (failure = reject output) | |
| 2.13 | New constraints include a placement recommendation (slot into sequence / parallel quick-deploy / queue / monitor) | 0 (required, not scored separately) | |

Section 2 Total: ___ / 25
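Check 2.10's status rule can be expressed as a small guard. This is a minimal sketch, not part of the kit: the `Evidence` shape, field names, and the 14-day staleness window are all assumptions (the checklist defines "stale" only qualitatively).

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Hypothetical staleness window; the checklist does not fix a number.
STALE_AFTER = timedelta(days=14)

@dataclass
class Evidence:
    observed_on: date
    status: str  # e.g. "Open", "Stalled", "Resolved"

def derive_status(latest: Optional[Evidence], today: date) -> str:
    """Check 2.10: a status must be supported by a current signal.

    With no evidence, or only stale evidence, the status is "Unknown",
    never Open, Stalled, or anything else assumed from silence.
    """
    if latest is None or today - latest.observed_on > STALE_AFTER:
        return "Unknown"
    return latest.status
```

The point of the guard is the default: absence of a signal maps to Unknown, and the advisor generates a status-check question (check 5.5) rather than a diagnosis.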


Section 3: Process 3 — Pattern Detection (20 points)

| # | Check | Points | Pass/Fail |
| --- | --- | --- | --- |
| 3.1 | All six pattern types scanned: Recurring, Escalating, Same Root Cause, Upstream Connections, Resurfaced, Solve Together | 3 | |
| 3.2 | Correct terminology used — no retired terms (no "synergy," "compounding," "cluster") | 3 | |
| 3.3 | Correct emoji used for each pattern flag per 02-constraint-priority-matrix-terminology.md | 2 | |
| 3.4 | Solve Together pairs show all three tests explicitly: (a) A helps B, (b) B helps A, (c) shared time window creates multiplier | 4 | |
| 3.5 | ALL unique pairs of open constraints tested for Solve Together — not pre-filtered by judgment. Pairs involving Unknown-status constraints flagged as Conditional Solve Together (↔️❓) with rationale. | 3 | |
| 3.6 | Pattern Summary counts match the actual flags on individual constraint entries — no independent counts | 3 | |
| 3.7 | Every constraint number in the Pattern Summary matches the numbering in the constraint entries | 2 | |

Mode 3 additional check:

| # | Check | Points | Pass/Fail |
| --- | --- | --- | --- |
| 3.8 | Solve Together testing not re-run on constraints already sequenced in builds — only new constraints tested | 0 (failure = reject output) | |

Section 3 Total: ___ / 20
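Checks 3.4 and 3.5 together amount to exhaustive pair enumeration with three explicit tests. A sketch of that loop, under stated assumptions: `a_helps_b` and `shared_window` are hypothetical stand-ins for advisor judgment, and the constraint dicts' field names (`id`, `status`) are illustrative, not the kit's schema.

```python
from itertools import combinations

def solve_together_flags(constraints, a_helps_b, shared_window):
    """Checks 3.4-3.5: run all three tests on every unique pair.

    No pair is pre-filtered by judgment; every open pair is tested.
    Pairs touching an Unknown-status constraint get the Conditional flag.
    """
    open_cs = [c for c in constraints if c["status"] != "Resolved"]
    flags = {}
    for a, b in combinations(open_cs, 2):
        # (a) A helps B, (b) B helps A, (c) shared window creates multiplier
        if a_helps_b(a, b) and a_helps_b(b, a) and shared_window(a, b):
            conditional = "Unknown" in (a["status"], b["status"])
            flags[(a["id"], b["id"])] = "↔️❓" if conditional else "↔️"
    return flags
```

`itertools.combinations(open_cs, 2)` is what makes check 3.5 mechanical: every unique pair appears exactly once, so a skipped pair is a code path that cannot happen.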


Section 4: Process 4 — Prioritization (20 points)

| # | Check | Points | Pass/Fail |
| --- | --- | --- | --- |
| 4.1 | Every open constraint assigned to a tier (1–4). Unknown-status constraints assigned a provisional tier with "pending status confirmation" noted. | 3 | |
| 4.2 | Tier assignments follow the logic: Tier 1 = Upstream + This Week + High Direction Impact; Tier 2 = dependent on Tier 1 or secondary; Tier 3 = downstream, monitor; Tier 4 = self-resolving | 4 | |
| 4.3 | Additional weight applied correctly: +1 for Resurfaced, Pattern Flag, High Opportunity Cost, Same Root Cause | 3 | |
| 4.4 | Rationale provided for each constraint's tier placement — not just the tier label | 3 | |
| 4.5 | Solve Together pairs noted in rationale for BOTH constraints in the pair. Conditional Solve Together pairs noted with the condition that must be resolved. | 3 | |
| 4.6 | Solve Together recommendation specifies: shared window, multiplier effect, combined initiative structure | 2 | |
| 4.7 | Constraint numbering is sequential and consistent with all other outputs | 2 | |

Mode 3 additional check:

| # | Check | Points | Pass/Fail |
| --- | --- | --- | --- |
| 4.8 | Constraints in the active build sequence retain their build order — they are NOT re-tiered | 0 (failure = reject output) | |

Section 4 Total: ___ / 20
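The tier logic in check 4.2 and the weighting in 4.3 can be sketched as two small functions. Every field name here is an assumption for illustration; the kit does not define a data schema, and how the +1 weights interact with tier placement is left to advisor judgment, so the sketch keeps them separate.

```python
def assign_tier(c):
    """Check 4.2 tier logic (field names are illustrative assumptions)."""
    if (c["type"] == "Upstream" and c["speed"] == "This Week"
            and c["direction_impact"] == "High"):
        return 1
    if c.get("depends_on_tier1") or c.get("secondary"):
        return 2
    if c["type"] == "Downstream":
        return 3  # monitor
    return 4      # self-resolving

def extra_weight(c):
    """Check 4.3: +1 for each applicable factor."""
    factors = ("resurfaced", "pattern_flag",
               "high_opportunity_cost", "same_root_cause")
    return sum(bool(c.get(k)) for k in factors)
```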


Section 5: Process 5 — Session Prep (15 points)

| # | Check | Points | Pass/Fail |
| --- | --- | --- | --- |
| 5.1 | Prioritized Constraint List present and matches Process 4 output exactly (or Initiative Status table for Mode 3) | 2 | |
| 5.2 | Visual Matrix grid present with correct Category × Speed layout | 2 | |
| 5.3 | Visual Matrix flags match individual constraint entry flags — no mismatches | 3 | |
| 5.4 | Session Prep Brief includes: Recommended Focus, Why This One, Pre-Work, Questions to Ask, Capability Category, Pattern Watch | 3 | |
| 5.5 | Questions to Ask are specific to the recommended constraint — not generic advisory questions. Includes status check questions for every Unknown-status constraint. | 3 | |
| 5.6 | Pattern Watch references specific constraints by number, matching the prioritized list | 2 | |

Mode 3 additional checks:

| # | Check | Points | Pass/Fail |
| --- | --- | --- | --- |
| 5.7 | Visual Matrix uses "Active Build" and "Build Sequence (Queue)" columns — not "This Week" and "Queue" | 0 (required, not scored separately) | |
| 5.8 | Session Prep recommended focus follows the project plan — what build is active, what's due | 0 (failure = reject output) | |
| 5.9 | New constraints positioned as watch items or secondary — not competing priorities against active builds | 0 (required, not scored separately) | |

Section 5 Total: ___ / 15


Section 6: Cross-Output Validation (10 points)

| # | Check | Points | Pass/Fail |
| --- | --- | --- | --- |
| 6.1 | Constraint numbers (C1, C2, etc.) are consistent across ALL outputs: entries, pattern summary, prioritized list, visual matrix, session prep | 3 | |
| 6.2 | Every Solve Together pair (confirmed and conditional) referenced in any output matches the pair flagged on the individual entries | 3 | |
| 6.3 | No independently generated summary data — all counts, flags, and references are derived from the constraint entries | 2 | |
| 6.4 | If a mismatch exists between a summary and an entry, the entry is treated as source of truth | 2 | |

Section 6 Total: ___ / 10
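Checks 6.3 and 6.4 describe a derive-then-compare discipline: the summary is generated from the entries, and any mismatch means the summary (not the entry) is wrong. A minimal sketch, with illustrative field names (`id`, `flags`) that are assumptions, not the kit's schema:

```python
def build_pattern_summary(entries):
    """Check 6.3: counts and references derived from the entries,
    never written independently."""
    summary = {}
    for e in entries:
        for flag in e["flags"]:
            summary.setdefault(flag, []).append(e["id"])
    return summary

def summary_matches(summary, entries):
    """Check 6.4: entries are the source of truth. A False result
    means the summary must be regenerated, not the entries edited."""
    return summary == build_pattern_summary(entries)
```

This mirrors the "build summary LAST" fix in the failure-mode table: if the summary is always the output of `build_pattern_summary`, mismatches cannot occur.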


Scoring

| Section | Points Available | Score |
| --- | --- | --- |
| 1. Client Context | 10 | |
| 2. Categorize & Tag | 25 | |
| 3. Pattern Detection | 20 | |
| 4. Prioritization | 20 | |
| 5. Session Prep | 15 | |
| 6. Cross-Output Validation | 10 | |
| TOTAL | 100 | |

Pass threshold: 90 / 100

Mode 3 reject triggers: Checks 2.12, 3.8, 4.8, and 5.8 are not scored — they are pass/reject. If any of these fail, the output is rejected regardless of the point score. These prevent the matrix from overriding the project plan.

Required revise triggers (all modes): Checks 2.9, 2.10 are not scored — they are pass/revise. If either fails, the output must be revised before delivery regardless of the point score. These prevent misdiagnosis of behavioral constraints and inference from data gaps.
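The interaction between the point score and the two kinds of unscored gates can be stated as one decision function. A sketch only; the dict-of-booleans shape is an assumption, and reject is checked first because a rejected output is rejected regardless of anything else.

```python
def verdict(score, mode3_gates, revise_gates):
    """Combine the 100-point score with the unscored gates.

    mode3_gates: results for checks 2.12, 3.8, 4.8, 5.8
    (pass an empty dict in Modes 1-2, where they do not apply).
    revise_gates: results for checks 2.9 and 2.10 (all modes).
    """
    if not all(mode3_gates.values()):
        return "reject"  # the matrix never overrides the project plan
    if not all(revise_gates.values()):
        return "revise"  # behavioral misdiagnosis or status inferred from a gap
    return "pass" if score >= 90 else "revise"
```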


Common Failure Modes

| Failure | Where It Shows Up | How to Fix |
| --- | --- | --- |
| Direction built from advisor inference, not client's stated goal | Section 1 | Ask: has this client stated a goal for the current year? If yes, that's Direction. If no, flag as inferred. |
| Behavioral constraint collapsed into system constraint | Section 2 | If the gap is what someone does rather than what doesn't exist, it's a behavioral constraint (🧠). Give it its own entry with a behavioral deployment approach. |
| Status inferred from absence of data | Section 2 | If the last evidence is stale and no other source fills the gap, status is Unknown. Do not diagnose from silence. Generate a status check question. |
| Retired terminology used | Sections 3, 4, 5 | Check every flag against 02-constraint-priority-matrix-terminology.md before finalizing. |
| Constraint numbers drift across outputs | Section 6 | Number once in Process 2, carry through — never renumber mid-output. |
| Solve Together pairs not fully tested | Section 3 | Run all three tests on every unique pair; document the test, not just the result. Flag pairs involving Unknown constraints as Conditional. |
| Generic opportunity cost | Section 2 | Rewrite with specifics: what's lost, how much, how fast. |
| Pattern Summary doesn't match flags | Sections 3, 6 | Build summary LAST by pulling directly from flagged entries — never write it independently. |
| GPS Direction goal vague in "Blocks" field | Section 2 | Pull exact language from client's stated goal — "Jana owns operations," not "improve delegation." |
| Downstream constraint missing upstream reference | Section 2 | Every downstream must name its upstream cause by constraint number. |
| Parallel root causes not identified | Sections 2, 3 | When two upstream constraints feed the same downstream symptoms through different mechanisms (structural vs. behavioral), flag both and note the parallel relationship. |
| Re-prioritized sequenced builds (Mode 3) | Section 4 | If a project plan exists with dated builds, the matrix confirms status — it does not re-rank. New constraints are tiered relative to the active initiative. |
| Session prep competes with project plan (Mode 3) | Section 5 | Recommended focus follows the active build. New constraints are watch items, not competing priorities. |
| Wrong output format | All sections | Weekly output is markdown. HTML is for client-facing presentations only (via 06). |