Constraint Priority Matrix — Quality Checklist
Last updated: February 18, 2026. Scored on a 100-point scale: 90+ = pass; below 90 = revise before delivering. This checklist is called by Process 6 of the workflow, and every output is scored before delivery.
Section 1: Process 1 — Client Context (10 points)
| # | Check | Points | Pass/Fail |
|---|---|---|---|
| 1.1 | GPS has all three dimensions: Position, Direction, Speed | 3 | |
| 1.2 | GPS Direction anchored to the client's stated goal or vision in their own words. If the client has done vision work or stated an annual/engagement goal, that language is the primary Direction anchor. Advisor synthesis of unstated goals is secondary context (labeled as "horizon this unlocks" or equivalent), not the Direction itself. If no stated goal exists, Direction is flagged as advisor-inferred. | 4 | |
| 1.3 | Operating mode correctly identified (Mode 1 = first run, Mode 2 = returning no project plan, Mode 3 = returning with active project plan) | 2 | |
| 1.4 | All data sources listed with availability status, including group session data and stated goal/vision artifacts if they exist | 1 | |
Section 1 Total: ___ / 10
Section 2: Process 2 — Categorize & Tag (25 points)
| # | Check | Points | Pass/Fail |
|---|---|---|---|
| 2.1 | Every constraint has all required fields: Category, Type (+ Modifier if applicable), Blocks, Speed, Opportunity Cost, Status, Recurrence, Pattern Flag, Client's Words | 5 | |
| 2.2 | Every constraint typed as Upstream or Downstream — no untyped constraints | 3 | |
| 2.3 | Upstream constraints explain what they're upstream OF (which constraints they cause) | 3 | |
| 2.4 | Downstream constraints reference their upstream cause | 3 | |
| 2.5 | Category assigned from the locked list only (Advisory Website, Operational Systems, Visibility Engine, Pipeline Infrastructure, Service Line Launch, Product Suite) | 2 | |
| 2.6 | "Blocks" field references a specific GPS Direction goal — traced to the client's stated goal language, not a vague or inferred statement | 3 | |
| 2.7 | Opportunity Cost is specific and concrete — not generic ("revenue loss" without context) | 3 | |
| 2.8 | Client's own words included for each constraint (direct quote or paraphrase attributed to client) | 3 | |
Additional checks (apply when relevant):
| # | Check | Points | Pass/Fail |
|---|---|---|---|
| 2.9 | Behavioral constraints (🧠 modifier) identified where the gap is a pattern of human behavior, not a missing system. Deployment approach addresses the behavioral mechanism — not just system design. | 0 (required, not scored separately — failure to identify a behavioral constraint when evidence supports it = revise) | |
| 2.10 | Every constraint's status is supported by evidence, not inferred from absence of evidence. If the most recent data is stale and no other source provides a current signal, status is Unknown — not Open, Stalled, or any other assumed status. | 0 (required, not scored separately — failure = revise) | |
Mode 3 additional checks (apply only when project plan is active):
| # | Check | Points | Pass/Fail |
|---|---|---|---|
| 2.11 | Initiative Status section present — every build in the project plan reported with current deployment step | 0 (required, not scored separately) | |
| 2.12 | Constraints in the active build sequence are NOT re-categorized or re-tagged — status update only | 0 (failure = reject output) | |
| 2.13 | New constraints include a placement recommendation (slot into sequence / parallel quick-deploy / queue / monitor) | 0 (required, not scored separately) | |
Section 2 Total: ___ / 25
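Checks 2.1 and 2.2 are mechanical enough to sketch as a field validator. A minimal sketch, assuming a dict-shaped constraint entry; the field names mirror the checklist, but the entry shape itself is hypothetical, not a real schema from the workflow.

```python
# Required fields per check 2.1; Modifier is optional, so it is not listed.
REQUIRED_FIELDS = (
    "Category", "Type", "Blocks", "Speed", "Opportunity Cost",
    "Status", "Recurrence", "Pattern Flag", "Client's Words",
)

def missing_fields(entry):
    """Check 2.1: return required fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not entry.get(f)]

def is_typed(entry):
    """Check 2.2: every constraint must be Upstream or Downstream.

    startswith() tolerates a trailing modifier (e.g. the behavioral marker).
    """
    return str(entry.get("Type", "")).startswith(("Upstream", "Downstream"))
```

An entry passing both checks would report no missing fields and a valid type; anything else fails Section 2 before scoring continues.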
Section 3: Process 3 — Pattern Detection (20 points)
| # | Check | Points | Pass/Fail |
|---|---|---|---|
| 3.1 | All six pattern types scanned: Recurring, Escalating, Same Root Cause, Upstream Connections, Resurfaced, Solve Together | 3 | |
| 3.2 | Correct terminology used — no retired terms (no "synergy," "compounding," "cluster") | 3 | |
| 3.3 | Correct emoji used for each pattern flag per 02-constraint-priority-matrix-terminology.md | 2 | |
| 3.4 | Solve Together pairs show all three tests explicitly: (a) A helps B, (b) B helps A, (c) shared time window creates multiplier | 4 | |
| 3.5 | ALL unique pairs of open constraints tested for Solve Together — not pre-filtered by judgment. Pairs involving Unknown-status constraints flagged as Conditional Solve Together (↔️❓) with rationale. | 3 | |
| 3.6 | Pattern Summary counts match the actual flags on individual constraint entries — no independent counts | 3 | |
| 3.7 | Every constraint number in the Pattern Summary matches the numbering in the constraint entries | 2 | |
Mode 3 additional check:
| # | Check | Points | Pass/Fail |
|---|---|---|---|
| 3.8 | Solve Together testing not re-run on constraints already sequenced in builds — only new constraints tested | 0 (failure = reject output) | |
Section 3 Total: ___ / 20
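Check 3.5 demands exhaustive pairing, not judgment-filtered pairing, and that enumeration can be sketched mechanically. A minimal sketch, assuming a status-by-ID dict; the IDs and statuses are illustrative, and the three-part test itself (A helps B, B helps A, shared window multiplier) remains human judgment, so it stays a placeholder.

```python
from itertools import combinations

# Hypothetical open constraints; IDs and statuses are illustrative only.
open_constraints = {"C1": "Open", "C2": "Open", "C3": "Unknown", "C4": "Open"}

def solve_together_pairs(statuses):
    """Check 3.5: enumerate EVERY unique pair of open constraints.

    Pairs involving an Unknown-status constraint are flagged as
    Conditional Solve Together rather than tested outright.
    """
    results = []
    for a, b in combinations(sorted(statuses), 2):
        conditional = "Unknown" in (statuses[a], statuses[b])
        results.append((a, b, "conditional" if conditional else "test all three"))
    return results
```

Four open constraints always yield six pairs; if the documented test count is lower, pairs were pre-filtered and check 3.5 fails.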
Section 4: Process 4 — Prioritization (20 points)
| # | Check | Points | Pass/Fail |
|---|---|---|---|
| 4.1 | Every open constraint assigned to a tier (1–4). Unknown-status constraints assigned a provisional tier with "pending status confirmation" noted. | 3 | |
| 4.2 | Tier assignments follow the logic: Tier 1 = Upstream + This Week + High Direction Impact; Tier 2 = dependent on Tier 1 or secondary; Tier 3 = downstream, monitor; Tier 4 = self-resolving | 4 | |
| 4.3 | Additional weight applied correctly: +1 for Resurfaced, Pattern Flag, High Opportunity Cost, Same Root Cause | 3 | |
| 4.4 | Rationale provided for each constraint's tier placement — not just the tier label | 3 | |
| 4.5 | Solve Together pairs noted in rationale for BOTH constraints in the pair. Conditional Solve Together pairs noted with the condition that must be resolved. | 3 | |
| 4.6 | Solve Together recommendation specifies: shared window, multiplier effect, combined initiative structure | 2 | |
| 4.7 | Constraint numbering is sequential and consistent with all other outputs | 2 | |
Mode 3 additional check:
| # | Check | Points | Pass/Fail |
|---|---|---|---|
| 4.8 | Constraints in the active build sequence retain their build order — they are NOT re-tiered | 0 (failure = reject output) | |
Section 4 Total: ___ / 20
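The tier logic in checks 4.1 through 4.3 can be sketched as a single function. A sketch under stated assumptions: the field names (`type`, `speed`, `impact`, `flags`, `status`, `depends_on_tier1`, `secondary`) are invented for illustration and are not a schema defined by the workflow.

```python
def assign_tier(c):
    """Checks 4.1-4.3 sketched: base tier, additional weight, provisional flag.

    Returns (tier, extra_weight, provisional) where provisional marks an
    Unknown-status constraint pending status confirmation (check 4.1).
    """
    if c["type"] == "Upstream" and c["speed"] == "This Week" and c["impact"] == "High":
        tier = 1
    elif c.get("depends_on_tier1") or c.get("secondary"):
        tier = 2
    elif c["type"] == "Downstream":
        tier = 3  # monitor
    else:
        tier = 4  # self-resolving
    # Check 4.3: +1 for each weighting factor present.
    weight = sum(1 for f in ("Resurfaced", "Pattern Flag",
                             "High Opportunity Cost", "Same Root Cause")
                 if f in c.get("flags", ()))
    return tier, weight, c.get("status") == "Unknown"
```

The tier and weight are mechanical; the rationale required by check 4.4 is not, which is why the checklist scores rationale separately from placement.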
Section 5: Process 5 — Session Prep (15 points)
| # | Check | Points | Pass/Fail |
|---|---|---|---|
| 5.1 | Prioritized Constraint List present and matches Process 4 output exactly (or Initiative Status table for Mode 3) | 2 | |
| 5.2 | Visual Matrix grid present with correct Category × Speed layout | 2 | |
| 5.3 | Visual Matrix flags match individual constraint entry flags — no mismatches | 3 | |
| 5.4 | Session Prep Brief includes: Recommended Focus, Why This One, Pre-Work, Questions to Ask, Capability Category, Pattern Watch | 3 | |
| 5.5 | Questions to Ask are specific to the recommended constraint — not generic advisory questions. Includes status check questions for every Unknown-status constraint. | 3 | |
| 5.6 | Pattern Watch references specific constraints by number, matching the prioritized list | 2 | |
Mode 3 additional checks:
| # | Check | Points | Pass/Fail |
|---|---|---|---|
| 5.7 | Visual Matrix uses "Active Build" and "Build Sequence (Queue)" columns — not "This Week" and "Queue" | 0 (required, not scored separately) | |
| 5.8 | Session Prep recommended focus follows the project plan — what build is active, what's due | 0 (failure = reject output) | |
| 5.9 | New constraints positioned as watch items or secondary — not competing priorities against active builds | 0 (required, not scored separately) | |
Section 5 Total: ___ / 15
Section 6: Cross-Output Validation (10 points)
| # | Check | Points | Pass/Fail |
|---|---|---|---|
| 6.1 | Constraint numbers (C1, C2, etc.) are consistent across ALL outputs: entries, pattern summary, prioritized list, visual matrix, session prep | 3 | |
| 6.2 | Every Solve Together pair (confirmed and conditional) referenced in any output matches the pair flagged on the individual entries | 3 | |
| 6.3 | No independently generated summary data — all counts, flags, and references are derived from the constraint entries | 2 | |
| 6.4 | If a mismatch exists between a summary and an entry, the entry is treated as source of truth | 2 | |
Section 6 Total: ___ / 10
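Checks 6.3 and 6.4 reduce to one rule: summary data is derived from entries, never written independently. A minimal sketch of that derivation; the entry list is hypothetical sample data.

```python
from collections import Counter

# Hypothetical entries; in practice these come from the Process 2 output.
entries = [
    {"id": "C1", "flags": ["Recurring"]},
    {"id": "C2", "flags": ["Recurring", "Solve Together"]},
    {"id": "C3", "flags": []},
]

def pattern_summary(entries):
    """Checks 6.3/6.4: derive counts from entries, the single source of truth."""
    return dict(Counter(f for e in entries for f in e["flags"]))
```

Because the counts are computed from the flagged entries, a summary/entry mismatch (check 6.4) cannot occur; any hand-written count that disagrees is wrong by definition.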
Scoring
| Section | Points Available | Score |
|---|---|---|
| 1. Client Context | 10 | |
| 2. Categorize & Tag | 25 | |
| 3. Pattern Detection | 20 | |
| 4. Prioritization | 20 | |
| 5. Session Prep | 15 | |
| 6. Cross-Output Validation | 10 | |
| TOTAL | 100 | |
Pass threshold: 90 / 100
Mode 3 reject triggers: Checks 2.12, 3.8, 4.8, and 5.8 are not scored — they are pass/reject. If any of these fail, the output is rejected regardless of the point score. These prevent the matrix from overriding the project plan.
Required revise triggers (all modes): Checks 2.9, 2.10 are not scored — they are pass/revise. If either fails, the output must be revised before delivery regardless of the point score. These prevent misdiagnosis of behavioral constraints and inference from data gaps.
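The full delivery gate combines three independent signals: the point score, the Mode 3 reject triggers, and the all-modes revise triggers, with the triggers overriding the score. A minimal sketch, assuming dict inputs; the check IDs are from this document, the function shape is not.

```python
def gate(section_scores, reject_checks, revise_checks):
    """Delivery gate: trigger failures override the point score.

    section_scores: section name -> points earned (100 available total).
    reject_checks:  checks 2.12/3.8/4.8/5.8, True = passed (Mode 3 only).
    revise_checks:  checks 2.9/2.10, True = passed (all modes).
    """
    if not all(reject_checks.values()):
        return "reject"   # project plan would be overridden
    if not all(revise_checks.values()):
        return "revise"   # misdiagnosis or inference from data gaps
    return "pass" if sum(section_scores.values()) >= 90 else "revise"
```

Note the ordering: a 100-point output with a failed reject trigger is still rejected, which is exactly the behavior the two trigger paragraphs above specify.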
Common Failure Modes
| Failure | Where It Shows Up | How to Fix |
|---|---|---|
| Direction built from advisor inference, not client's stated goal | Section 1 | Ask: has this client stated a goal for the current year? If yes, that's Direction. If no, flag as inferred. |
| Behavioral constraint collapsed into system constraint | Section 2 | If the gap is what someone does rather than what doesn't exist, it's a behavioral constraint (🧠). Give it its own entry with a behavioral deployment approach. |
| Status inferred from absence of data | Section 2 | If the last evidence is stale and no other source fills the gap, status is Unknown. Do not diagnose from silence. Generate a status check question. |
| Retired terminology used | Sections 3, 4, 5 | Check every flag against 02-constraint-priority-matrix-terminology.md before finalizing |
| Constraint numbers drift across outputs | Section 6 | Number once in Process 2, carry through — never renumber mid-output |
| Solve Together pairs not fully tested | Section 3 | Run all three tests on every unique pair and document each test, not just the result. Flag pairs involving Unknown constraints as Conditional. |
| Generic opportunity cost | Section 2 | Rewrite with specifics: what's lost, how much, how fast |
| Pattern Summary doesn't match flags | Sections 3, 6 | Build summary LAST by pulling directly from flagged entries — never write independently |
| GPS Direction goal vague in "Blocks" field | Section 2 | Pull exact language from client's stated goal — "Jana owns operations" not "improve delegation" |
| Downstream constraint missing upstream reference | Section 2 | Every downstream must name its upstream cause by constraint number |
| Parallel root causes not identified | Sections 2, 3 | When two upstream constraints feed the same downstream symptoms through different mechanisms (structural vs. behavioral), flag both and note the parallel relationship |
| Re-prioritized sequenced builds (Mode 3) | Section 4 | If a project plan exists with dated builds, the matrix confirms status — it does not re-rank. New constraints are tiered relative to the active initiative. |
| Session prep competes with project plan (Mode 3) | Section 5 | Recommended focus follows the active build. New constraints are watch items, not competing priorities. |
| Wrong output format | All sections | Weekly output is markdown. HTML is for client-facing presentations only (via 06). |