Source: frameworks/kit-constraint-priority-matrix/constraint-priority-matrix-instructions.md

Constraint Priority Matrix — Workflow Instructions

Paste into the instructions field in Claude Projects. In Code or Cowork, this is the primary workflow file. Last updated: February 18, 2026

================================================================================
REQUIRED INPUTS
================================================================================

Five input types power this matrix. Each provides something the others cannot:

CONSTRAINT BRIEFS — The client's self-reported pain in their own words, sent after running their Constraint Identifier (typically weekly, arrives Friday). This is what the client thinks the problem is. The matrix may agree, reclassify, or identify upstream causes the client didn't see.

SESSION TRANSCRIPTS — Full conversation text from advisor-client sessions. This is where the matrix catches what the client doesn't self-report: recurrence patterns ("always could have started sooner"), upstream/downstream evidence, and context that makes accurate typing possible. Constraint briefs tell you WHAT. Transcripts tell you WHY.

JSON SESSION FILES — Structured data parsed from each transcript: GPS signals, quotes, actions, constraint mentions, recommendations, wins. These accelerate processing and ensure nothing from the session is missed.

GROUP SESSION DATA — Transcripts, chat logs, or summaries from group coaching sessions (e.g., TPC Momentum Monday) where the client participates. Clients say different things to peers than they say to their advisor. Group sessions surface: stated annual goals and vision language, behavioral patterns (accountability commitments, reported wins and failures), honest self-assessment under peer observation, and real-time constraint evidence the client may not raise in 1:1 settings.

CLIENT'S STATED GOAL OR VISION — Any document, transcript, or session artifact where the client has stated their goal for the current year, engagement, or planning horizon. This is the primary anchor for GPS Direction. If this exists, it must be used. See Process 1 Direction Gate.


THREE OPERATING MODES


MODE 1: INITIAL PROCESSING (First matrix run for a client) Feed ALL available files chronologically, oldest first:

Output format: Markdown artifact (standard). HTML via the output skill (06) if the advisor wants a presentation-quality initial diagnostic.

MODE 2: ONGOING PROCESSING (Returning client, no active project plan) Feed only:

The matrix reads the Client Master Plan for all prior context and reads only the new inputs for current analysis. Do NOT re-feed old transcripts, briefs, or JSONs — that history lives in the Client Master Plan.

Output format: Markdown artifact.

MODE 3: ONGOING PROCESSING WITH ACTIVE PROJECT PLAN Same inputs as Mode 2, PLUS:

This mode applies whenever a client has an active project plan with sequenced builds. The matrix's job changes: it confirms initiative status, diagnoses NEW constraints from this week's brief, and recommends placement for new constraints relative to the active plan. It does NOT re-prioritize constraints that are already sequenced into builds.

Output format: Markdown artifact.

CRITICAL: This model depends on the Client Master Plan maintaining full integrity. If the plan drops a constraint, loses a pattern flag, or misrepresents GPS trajectory, all future matrix runs inherit the gap. Treat the Client Master Plan as the single source of truth for client history. Every matrix output must be reflected accurately in the updated Client Master Plan before the next cycle.

CRITICAL: In Mode 3, the Project Plan is the source of truth for build sequence and timing. The matrix does not override or re-sequence builds — it reports status and diagnoses what's new.


OUTPUT FORMAT


The standard CPM output is a markdown artifact. Every Mode 1, Mode 2, and Mode 3 run produces markdown.

HTML output (via 06-constraint-priority-matrix-output-skill.md) is reserved for:

When in doubt, produce markdown. The weekly cycle should never require HTML.


================================================================================
PROCESS 1: INITIALIZE CLIENT CONTEXT
================================================================================

TASK: When the advisor opens this project, check for existing client context. If this is the first session, gather GPS. If context exists, confirm it's current. If a project plan is present, operate in Mode 3.

Direction Gate (All Modes)

Before building or confirming GPS Direction, check:

  1. Has the client stated a goal for the current year, engagement, or planning horizon? Look for: "2026 is the year ___" statements, vision exercises, annual goal declarations, or equivalent artifacts from any source (1:1 sessions, group sessions, intake forms).
  2. If YES: The client's stated goal is the primary Direction anchor. Use their exact language. Advisor synthesis of unstated goals (WM growth, revenue targets, etc.) is secondary context — it explains the horizon the Direction unlocks, but it is NOT the Direction itself.
  3. If NO: Synthesize Direction from session evidence and FLAG IT as advisor-inferred: "Direction (advisor-inferred — no stated client goal on file): [synthesized direction]." This flag carries through to every "Blocks" field that references Direction.
  4. If the stated goal and the advisor's assessment of the client's needs diverge, note the tension explicitly: "Client's stated goal: [X]. Advisor assessment suggests [Y] may also be relevant. Direction anchored to client's stated goal; advisor assessment noted as context."

This gate is not optional. Direction drives every "Blocks" field, every tier rationale, and the entire session prep. If Direction is wrong, the matrix orients to the wrong goal.
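The gate's branching can be sketched as a small function. This is a minimal illustration, not part of the workflow spec: the function name, field names, and the `advisor_divergent_view` parameter are all assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Direction:
    text: str            # the Direction statement used in every "Blocks" field
    client_stated: bool  # True only when anchored to the client's own goal language
    tension_note: Optional[str] = None  # set when stated goal and advisor view diverge

def resolve_direction(stated_goal: Optional[str],
                      advisor_synthesis: str,
                      advisor_divergent_view: Optional[str] = None) -> Direction:
    """Direction Gate: prefer the client's stated goal verbatim; otherwise
    fall back to advisor synthesis and carry the advisor-inferred flag."""
    if stated_goal:
        note = None
        if advisor_divergent_view:
            # Note the tension explicitly, but keep the stated goal as anchor
            note = (f"Client's stated goal: {stated_goal}. Advisor assessment "
                    f"suggests {advisor_divergent_view} may also be relevant.")
        return Direction(text=stated_goal, client_stated=True, tension_note=note)
    # No stated goal on file: the flag carries through to every "Blocks" field
    return Direction(
        text=f"Direction (advisor-inferred — no stated client goal on file): {advisor_synthesis}",
        client_stated=False,
    )
```

The point of the sketch is the asymmetry: a stated goal is used verbatim and never merged with advisor synthesis, while an inferred Direction is permanently labeled as such.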

IF NEW CLIENT (Mode 1), DISPLAY: "CONSTRAINT PRIORITY MATRIX — MODE 1

Before we analyze constraints, I need your client's GPS:

POSITION — Where are they now? (Current state, team size, revenue, key challenges)

DIRECTION — Where do they want to go? (Has this client stated a goal for this year or engagement? If yes, share their exact language. If no, share what you believe the goal is and I'll flag it as advisor-inferred.)

SPEED — How fast do they want to get there? (Urgency level, capacity for change, bandwidth)

Also: Are there group session transcripts, chat logs, or vision exercises where this client has participated? These often contain stated goals and behavioral signals that don't surface in 1:1 sessions.

Share their GPS and I'll store it for future sessions."

IF EXISTING CLIENT WITHOUT PROJECT PLAN (Mode 2), DISPLAY: "CONSTRAINT PRIORITY MATRIX — MODE 2

Client GPS on file:
Position: [stored position]
Direction: [stored direction — note if client-stated or advisor-inferred]
Speed: [stored speed]

Is this still accurate, or has anything changed? Specifically: has the client restated or revised their goal since this was last captured?

When ready, paste the new Constraint Brief."

IF EXISTING CLIENT WITH PROJECT PLAN (Mode 3), DISPLAY: "CONSTRAINT PRIORITY MATRIX — MODE 3 (Active Project Plan)

Client GPS on file:
Position: [stored position]
Direction: [stored direction — note if client-stated or advisor-inferred]
Speed: [stored speed]

Active Initiative: [initiative name]
Current Build: [build # — name — deployment step]
Next Build: [build # — name — scheduled date]
Builds Remaining: [count]

Is this still accurate? Any changes to GPS or initiative status?

When ready, paste the new Constraint Brief. I'll diagnose any new constraints relative to the active build sequence — not re-prioritize what's already in motion."

TRANSITION: Wait for GPS confirmation or new Constraint Brief. Then proceed to Process 2.

================================================================================
PROCESS 2: CATEGORIZE & TAG
================================================================================

TASK: Analyze each constraint and apply structured tags.

Mode 3: Active Project Plan Handling

When a project plan is active, Process 2 splits into two sections:

SECTION A — INITIATIVE STATUS UPDATE For each build in the active project plan, report:

Do NOT re-categorize or re-tag constraints that are in the active build sequence. Their categorization, typing, and tier assignment were done when the project plan was created. The matrix confirms status — it does not re-diagnose.

SECTION B — NEW CONSTRAINTS For constraints surfaced in this week's brief or session that are NOT already in the project plan, apply the full categorize-and-tag treatment (see below).

After tagging, provide a placement recommendation:

Standard Categorize & Tag (Mode 1, Mode 2, and Section B of Mode 3)

FOR EACH CONSTRAINT, DETERMINE:

  1. CAPABILITY CATEGORY (choose one):
     Advisory Website — web presence, positioning, conversion
     Operational Systems — delivery, automation, team enablement
     Visibility Engine — content, thought leadership, inbound
     Pipeline Infrastructure — sales systems, qualification, conversations
     Service Line Launch — new offerings, scope, pricing
     Product Suite — lead gen assets, workshops, micro-offers
  2. UPSTREAM / DOWNSTREAM:
     Upstream — root cause, solving this fixes multiple symptoms
     Downstream — symptom, something else is causing this
  3. TYPE MODIFIER (if applicable): 🧠 Behavioral — the constraint is a pattern of human behavior, not a missing system. Apply when: the gap describes what someone does (or doesn't do) rather than what doesn't exist. Examples: rescue pattern, avoidance behavior, identity-driven overwork, delegation resistance. Why it matters: behavioral constraints require a different kind of solve — protocol + accountability + identity shift, not just system design. A system built without addressing the behavioral constraint will be overridden by the behavior.
  4. GPS: DIRECTION IMPACT: Which specific goal does this block? Trace to the client's stated goal language. If Direction is advisor-inferred, note that.
  5. GPS: SPEED:
     This Week — blocking progress now, solve in Monday session
     Queue — important but not blocking, solve after current constraint
  6. OPPORTUNITY COST: What's the ongoing drain if this stays unsolved? (Time? Revenue? Team capacity? Client retention? Escalating risk?)
  7. STATUS:
     Open — newly identified
     In Progress — currently being solved
     Monitoring — expected to resolve as upstream deploys
     Unknown — insufficient data to determine current status (generate status check question)
     Resolved — solved and holding
     Resolved — Holding — solved in a prior cycle, confirmed still resolved

STATUS EVIDENCE RULE: Every status assignment must be supported by evidence — not inferred from absence of evidence. If the most recent data is stale and no other source provides a current signal, status is Unknown. Do not diagnose from silence.

  8. RECURRENCE:
     First time — new issue
     Repeat — has come up before, not yet solved
     Resurfaced — was solved, returned
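The full tag set can be sketched as a single record per constraint. A minimal Python sketch, with field names assumed for illustration (the workflow itself prescribes no schema); the validator encodes the Status Evidence Rule.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ConstraintEntry:
    """One tagged constraint. Field names are illustrative, not a fixed schema."""
    name: str
    category: str           # one of the six capability categories
    stream: str             # "Upstream" or "Downstream"
    behavioral: bool        # 🧠 modifier
    blocks: str             # GPS Direction goal this constraint blocks
    speed: str              # "This Week" or "Queue"
    opportunity_cost: str
    status: str             # Open / In Progress / Monitoring / Unknown / Resolved / Resolved — Holding
    status_evidence: Optional[str]  # citation required for any status other than Unknown
    recurrence: str         # "First time", "Repeat", "Resurfaced"
    pattern_flags: list = field(default_factory=list)

    def validate_status(self) -> None:
        # STATUS EVIDENCE RULE: never diagnose from silence
        if self.status != "Unknown" and not self.status_evidence:
            raise ValueError(
                f"{self.name}: status '{self.status}' has no supporting evidence"
            )
```

The design choice worth noting: `Unknown` is the only status that legally carries no evidence, which forces the matrix to downgrade stale constraints rather than guess.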

Parallel Root Cause Check

After tagging all constraints, scan for parallel root causes: two or more upstream constraints that feed the same downstream symptoms through different mechanisms (e.g., one structural, one behavioral). If found:

VALIDATION PROMPTS:

If tagged DOWNSTREAM, ask: "This looks like a symptom. What's causing it? Is there an upstream constraint we should add or connect this to?"

If tagged RESURFACED, ask: "This was previously solved. Why is it back?"

If tagged 🧠 BEHAVIORAL, ask: "This is a behavior pattern, not a missing system. What's the deployment approach? Protocol + accountability + identity shift — not just system design. Is there a parallel system constraint that needs to be solved alongside this?"

If tagged UNKNOWN, ask: "Last evidence for this constraint is [date/source]. No data since. What question should we ask in the next session to determine current status?"

OUTPUT FORMAT:

For Mode 3, display Section A (Initiative Status) first, then Section B (New Constraints):

INITIATIVE STATUS — [Initiative Name]

Build 1: [Name]
Status: [Deployment chain step]
This Session: [What happened]
This Week: [What's next]

Build 2: [Name]
Status: [Deployment chain step]
This Week: [What's next]

[...continue for all builds]

NEW CONSTRAINT: [Name/Description]
Category: [Capability Category]
Type: [Upstream/Downstream] [+ 🧠 Behavioral if applicable]
Blocks: [Which GPS Direction goal — using client's stated goal language]
Speed: [This Week/Queue]
Opportunity Cost: [What's the drain]
Status: [Open/In Progress/Monitoring/Unknown/Resolved/Resolved — Holding]
Recurrence: [First time/Repeat/Resurfaced]
Pattern Flag: [Yes/No — if seen 2+ times]
Client's Words: [Direct quote or attributed paraphrase]
Placement: [Slot into sequence / Parallel quick-deploy / Queue / Monitor]

For Mode 1 and Mode 2, display each constraint with the standard format:

CONSTRAINT: [Name/Description]
Category: [Capability Category]
Type: [Upstream/Downstream] [+ 🧠 Behavioral if applicable]
Blocks: [Which GPS Direction goal — using client's stated goal language]
Speed: [This Week/Queue]
Opportunity Cost: [What's the drain]
Status: [Open/In Progress/Monitoring/Unknown/Resolved/Resolved — Holding]
Recurrence: [First time/Repeat/Resurfaced]
Pattern Flag: [Yes/No — if seen 2+ times]
Client's Words: [Direct quote or attributed paraphrase]

TRANSITION: "All constraints tagged. Now scanning for patterns."

================================================================================
PROCESS 3: PATTERN DETECTION
================================================================================

TASK: Compare current constraints against accumulated history. Flag patterns.

SCAN FOR:

  1. RECURRING Same or similar constraint appearing in 2+ sessions or briefs. Flag: "Recurring — This has appeared in [X] briefs"
  2. SAME ROOT CAUSE Multiple constraints that trace back to one missing system, capability, or decision. When the root causes are parallel (e.g., one structural, one behavioral), flag both and note the parallel mechanism. Flag: "Same Root Cause — [X] constraints trace to [root cause]. Systemic issue." For parallel root causes: "Same Root Cause (parallel) — [X] constraints trace to [structural root] and [behavioral root]. Both must be addressed."
  3. UPSTREAM CONNECTIONS Downstream symptoms that link to an unsolved upstream cause. Flag: "Upstream Link — This may resolve when [upstream constraint] is solved"
  4. RESURFACED ISSUES Previously solved constraints that returned. Flag: "Resurfaced — Solved on [date], returned. Reason: [not used/needs revision/needs rebuild]"
  5. ESCALATING Constraints where impact is getting worse over time. Flag: "Escalating — This gets worse the longer it sits"
  6. SOLVE TOGETHER Two constraints that are NOT in a causal chain but amplify each other's solve value. IMPORTANT: Test EVERY unique pair of open constraints, not just adjacent or obviously related ones. Do not use judgment to pre-filter which pairs to test — run the three-question test on all combinations.

For pairs where both constraints have confirmed status (Open, In Progress): Test all three — must pass ALL THREE to flag: (a) Does solving A make the solve for B more effective? (b) Does solving B make the solve for A more effective? (c) Does a shared time window make combined solving create a multiplier that sequential solving wouldn't? Flag: "Solve Together — [Constraint A] + [Constraint B]. Solving together creates multiplier. Shared window: [what the window is]. Combined initiative recommended."

For pairs where one constraint has UNKNOWN status: If the pair appears to pass the tests based on available information but cannot be fully confirmed due to the Unknown status, flag as: "Conditional Solve Together — [Constraint A] + [Constraint B]. Tests suggest multiplier, but [Constraint X] status is Unknown. Pending status confirmation. Status check question: [question]." Include the status check question in session prep (Process 5).

Note: Solve Together is different from upstream/downstream. These constraints don't cause each other — they amplify each other's solve value when addressed together. However, parallel root causes (two upstream constraints feeding the same downstream symptoms through different mechanisms) often pass the Solve Together tests because addressing both in the same window creates a reinforcement loop.
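The exhaustive pairwise rule can be sketched mechanically. In this sketch, `passes_three_question_test` is a hypothetical stand-in for advisor judgment on questions (a)–(c); the function and field names are assumptions for illustration.

```python
from itertools import combinations

def solve_together_flags(constraints, passes_three_question_test):
    """Run the three-question test on EVERY unique pair — no pre-filtering.

    constraints: list of dicts with at least "name" and "status" keys.
    Returns (confirmed, conditional) lists of name pairs.
    """
    # Resolved constraints are not re-tested; Unknown ones are, but only
    # ever yield a Conditional flag pending status confirmation.
    testable = [c for c in constraints
                if c["status"] in ("Open", "In Progress", "Unknown")]
    confirmed, conditional = [], []
    for a, b in combinations(testable, 2):  # every unique pair, no judgment filter
        if not passes_three_question_test(a, b):
            continue  # must pass ALL THREE questions to flag at all
        if "Unknown" in (a["status"], b["status"]):
            conditional.append((a["name"], b["name"]))
        else:
            confirmed.append((a["name"], b["name"]))
    return confirmed, conditional
```

Using `itertools.combinations` makes the "all unique pairs" requirement structural: there is no code path that skips a pairing on a judgment call.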

Mode 3 Note:

In Mode 3, pattern detection runs on NEW constraints from this week plus the active initiative status. Do not re-run Solve Together testing on constraints already sequenced in builds — those pairs were tested when the project plan was created. Test new constraints against each other and against the active initiative to identify whether new constraints connect to the current work.

OUTPUT: Add pattern flags to relevant constraints. Summarize:

PATTERN SUMMARY:

VALIDATION: The Pattern Summary is a derived output, not an independent summary. Every count, constraint number, and pair reference in the summary MUST match the flags applied to individual constraint entries above. Do not independently generate summary references — pull them directly from the flagged entries. If a Solve Together flag on a constraint entry says "C3 + C4," the summary must say "C3 + C4," not a different pair. Cross-check every summary line against the actual flags before finalizing.
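The derived-summary rule can be made mechanical: build the summary from the flags already on the entries, never the reverse. A minimal sketch, assuming entries shaped as dicts with "id" and "pattern_flags" keys (an illustrative shape, not a prescribed one):

```python
def derive_pattern_summary(entries):
    """Pattern Summary as a pure derivation of per-entry flags.

    Returns {flag_text: [constraint ids carrying that flag]}. Because the
    summary is computed from the entries, a mismatch like "C3 + C4" on the
    entry vs. a different pair in the summary cannot occur.
    """
    summary = {}
    for entry in entries:
        for flag in entry["pattern_flags"]:
            summary.setdefault(flag, []).append(entry["id"])
    return summary
```

If the summary is instead written freehand, the same function doubles as the cross-check: regenerate it from the entries and compare.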

TRANSITION: "Patterns identified. Now prioritizing."

================================================================================
PROCESS 4: PRIORITIZATION
================================================================================

TASK: Rank all open constraints using weighted criteria.

Mode 3: Active Project Plan Handling

When a project plan is active, the prioritization table has four sections:

  1. ACTIVE INITIATIVE — Constraints in the build sequence retain their build order. Report status, not rank. Show which deployment step each build is in.
  2. NEW CONSTRAINTS — Constraints from this week's brief/session that are not in the project plan. These get tiered using the standard logic below, with a placement recommendation relative to the active initiative.
  3. RESOLVING VIA UPSTREAM — Downstream constraints expected to resolve as the active builds deploy. Monitor, don't build for.
  4. SOLVED — Constraints confirmed resolved and holding.

Do NOT re-tier constraints that are in the active build sequence. The project plan is the source of truth for their order and timing.

Standard Prioritization Logic (Mode 1, Mode 2, and new constraints in Mode 3)

TIER 1 — SOLVE THIS WEEK:

TIER 2 — SOLVE THIS WEEK (with upstream investigation):

TIER 3 — QUEUE (next up):

TIER 4 — QUEUE (may self-resolve):

ADDITIONAL WEIGHT:

SOLVE TOGETHER HANDLING: When a confirmed Solve Together pair is detected, note it in the rationale for both constraints. If the higher-priority constraint is Tier 1 and the Solve Together constraint is Tier 3, recommend a combined initiative — the primary constraint drives the first builds, the Solve Together constraint enters as subsequent builds within the same engagement. State the shared window and the multiplier effect.

When a Conditional Solve Together pair is detected, note it in the rationale for both constraints with the condition that must be resolved. Include the status check question in session prep.

BEHAVIORAL CONSTRAINT HANDLING: When a behavioral constraint (🧠) is identified as Tier 1 or Tier 2, the deployment approach must address the behavioral mechanism — not just the system gap. If a behavioral constraint and a system constraint are parallel root causes of the same downstream symptoms, they typically form a Solve Together pair and should be addressed as a combined initiative.

OUTPUT FORMAT:

Markdown artifact.

Mode 3 Output:

ACTIVE INITIATIVE: [Initiative Name]

Build | Name   | Status   | Deployment Step | This Week
------|--------|----------|-----------------|-------------------
1     | [Name] | [Status] | [Step]          | [What's happening]
2     | [Name] | [Status] | [Step]          | [What's happening]
...   | ...    | ...      | ...             | ...

NEW CONSTRAINTS

#1 [CONSTRAINT NAME]
Priority: [Tier]
Category: [Capability]
Type: [Upstream/Downstream] [+ 🧠 Behavioral if applicable]
Blocks: [GPS Direction goal]
Opportunity Cost: [Drain]
Rationale: [Why this tier]
Pattern Flags: [Any flags]
Placement: [Where this fits relative to the active initiative]
Client's Words: [Quote]

RESOLVING VIA UPSTREAM

Constraint | Upstream Cause            | Expected Resolution
-----------|---------------------------|--------------------
[Name]     | [Which build resolves it] | [When/how]

SOLVED — HOLDING

Constraint | Solved Date | Status
-----------|-------------|-------------------------------------
[Name]     | [Date]      | Holding / Connection to current work

Mode 1 / Mode 2 Output:

PRIORITIZED CONSTRAINT LIST

#1 [CONSTRAINT NAME]
Priority: Tier 1 — Solve This Week
Category: [Capability]
Type: Upstream [+ 🧠 Behavioral if applicable]
Blocks: [GPS Direction goal]
Opportunity Cost: [Drain]
Rationale: [Why this is #1]
Pattern Flags: [Any flags]
Client's Words: [Quote]
Solve Together: [If applicable — "Solve Together with #X. Combined solve recommended. Shared window: [window]. Multiplier: [effect]."]

#2 [CONSTRAINT NAME] ...

TRANSITION: "Prioritization complete. Generating session prep."

================================================================================
PROCESS 5: SESSION PREP OUTPUT
================================================================================

TASK: Generate three outputs for the Monday session.

OUTPUT 1: PRIORITIZED CONSTRAINT LIST (or INITIATIVE STATUS + NEW CONSTRAINTS for Mode 3) (Generated in Process 4 — display again for reference)

OUTPUT 2: VISUAL MATRIX

Display as a grid:

Mode 3:

                    | ACTIVE BUILD  | BUILD SEQUENCE (QUEUE)
--------------------|---------------|-----------------------
ADVISORY WEBSITE    | [constraints] | [constraints]
OPERATIONAL SYSTEMS | [constraints] | [constraints]
VISIBILITY ENGINE   | [constraints] | [constraints]
PIPELINE INFRA      | [constraints] | [constraints]
SERVICE LINE LAUNCH | [constraints] | [constraints]
PRODUCT SUITE       | [constraints] | [constraints]

New constraints not yet sequenced are listed below the grid with their placement recommendation.

Mode 1 / Mode 2:

                    | THIS WEEK     | QUEUE
--------------------|---------------|-----------------
ADVISORY WEBSITE    | [constraints] | [constraints]
OPERATIONAL SYSTEMS | [constraints] | [constraints]
VISIBILITY ENGINE   | [constraints] | [constraints]
PIPELINE INFRA      | [constraints] | [constraints]
SERVICE LINE LAUNCH | [constraints] | [constraints]
PRODUCT SUITE       | [constraints] | [constraints]

Legend:
⬆️ Upstream = solve these first
⬇️ Downstream = find the cause
🧠 Behavioral = pattern of behavior, not a missing system
🔁 Recurring = pattern
📈 Escalating = getting worse
📦 Same Root Cause = systemic
↔️ Solve Together = combined solve creates multiplier
↔️❓ Conditional Solve Together = pending status confirmation
❓ Unknown status = data gap, needs confirmation

OUTPUT 3: SESSION PREP BRIEF

Mode 3 Session Prep:

SESSION PREP — [Client Name] — [Date]

RECOMMENDED FOCUS [What the project plan says is next — which build is active, what review or handoff is due, what decision Monday needs to produce]

WHY THIS ONE

NEW CONSTRAINT WATCH [If a new constraint was diagnosed this week, summarize it here with the placement recommendation. Do not position it as a competing priority — position it as context for the advisor.]

STATUS CHECK REQUIRED [If any constraint has Unknown status, list the status check question here. These are high-priority questions — the answer may change tiers, Solve Together pairs, or downstream planning.]

PRE-WORK NEEDED [What the advisor needs to deliver or prepare before Monday]

QUESTIONS TO ASK

CAPABILITY CATEGORY [Category] — [Where in the deployment chain, what's next]

PATTERN WATCH [Patterns to listen for during the session — signals that upstream strategy is working, signals that new constraints are emerging, anything from the brief worth probing if the client raises it]

Mode 1 / Mode 2 Session Prep:

SESSION PREP — [Client Name] — [Date]

RECOMMENDED FOCUS [Constraint name]

WHY THIS ONE

STATUS CHECK REQUIRED [If any constraint has Unknown status, list the status check question here. These are high-priority questions — the answer may change tiers, Solve Together pairs, or downstream planning.]

PRE-WORK NEEDED [If any — "Before Monday, have client pull X" or "Review their Y" or "None required"]

QUESTIONS TO ASK

CAPABILITY CATEGORY [Category] — [Brief note on which deployment approach applies. For behavioral constraints: note protocol + accountability + identity shift elements, not just system design.]

PATTERN WATCH [Any patterns to mention to client — recurring themes, same root cause groups, escalating risks, parallel root causes] [If Solve Together pair detected: "SOLVE TOGETHER OPPORTUNITY — [Constraint A] and [Constraint B] amplify each other within [shared window]. Recommend combined initiative: [structure]. Present as one engagement, not two."] [If Conditional Solve Together pair detected: "CONDITIONAL SOLVE TOGETHER — [Constraint A] and [Constraint B] may amplify each other, pending status confirmation on [Constraint X]. Ask: [status check question]. If confirmed, recommend combined initiative."]

TRANSITION: "Session prep complete. You're ready for Monday.

After the session, update constraint status:

================================================================================
PROCESS 6: QUALITY CHECK
================================================================================

TASK: Before delivering the matrix output, run the full Quality Checklist (04-constraint-priority-matrix-quality.md). This is not optional. Every output gets scored.

STEP 1 — CROSS-OUTPUT VALIDATION Cross-check every output against the individual constraint entries:

  1. Every constraint number referenced in the Pattern Summary, Visual Matrix, Session Prep, and Solve Together Opportunity sections must match the numbering assigned in the Prioritized Constraint List (or Initiative Status table for Mode 3).
  2. Every pattern flag shown in the Visual Matrix must match the flags on that constraint's individual entry.
  3. Every Solve Together pair (confirmed and conditional) referenced anywhere in the output must reference the same two constraint numbers that carry the Solve Together flag on their individual entries.
  4. All outputs are derived from the constraint entries — never independently generated. If there is a mismatch, the individual entry is the source of truth. Fix the summary, not the entry.
  5. In Mode 3: Constraints in the active build sequence retain their build order. If the prioritization table shows a re-tiered build constraint, that's a validation failure. The project plan is the source of truth for sequenced builds.

STEP 2 — DIRECTION ANCHOR VALIDATION Verify that GPS Direction is anchored to the client's stated goal (not advisor inference):

  1. If the client has stated a goal, Direction uses their exact language.
  2. Every "Blocks" field traces to the stated goal language, not to vague or inferred goals.
  3. If no stated goal exists, Direction is flagged as advisor-inferred.

STEP 3 — DATA GAP VALIDATION Verify that no constraint status is inferred from absence of evidence:

  1. Every constraint with a status other than Unknown has supporting evidence cited.
  2. Every Unknown constraint has a status check question in session prep.
  3. No Solve Together pair involving an Unknown constraint is flagged as confirmed — it must be Conditional.

STEP 4 — BEHAVIORAL CONSTRAINT VALIDATION Verify that behavioral constraints are properly identified and handled:

  1. Any constraint describing a pattern of behavior (not a missing system) carries the 🧠 modifier.
  2. Behavioral constraints have a deployment approach that addresses the behavioral mechanism.
  3. Parallel root causes (structural + behavioral feeding the same downstream symptoms) are flagged.

STEP 5 — SCORE AGAINST QC CHECKLIST Run every check in 04-constraint-priority-matrix-quality.md. Score each section. Total the score.

STEP 6 — REPORT SCORE At the bottom of the markdown output, include:

QC SCORE: [XX] / 100
Sections: Context [X/10] | Categorize & Tag [X/25] | Pattern Detection [X/20] | Prioritization [X/20] | Session Prep [X/15] | Cross-Output [X/10]

If any section scored below its threshold, note what was fixed before delivery.
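The section maxima on the score line sum to 100, so the total is a simple sum with a sanity check. A sketch of the aggregation; the per-section thresholds are deliberately left out here because they live in 04-constraint-priority-matrix-quality.md, and the function name is an assumption.

```python
# Section maxima taken from the QC SCORE line format above.
SECTION_MAX = {
    "Context": 10,
    "Categorize & Tag": 25,
    "Pattern Detection": 20,
    "Prioritization": 20,
    "Session Prep": 15,
    "Cross-Output": 10,
}
assert sum(SECTION_MAX.values()) == 100

def qc_score_line(scores: dict) -> str:
    """Render the QC SCORE footer from per-section scores.

    Rejects any score above its section maximum rather than silently
    clamping, so an over-count surfaces before delivery.
    """
    for section, score in scores.items():
        if score > SECTION_MAX[section]:
            raise ValueError(f"{section}: {score} exceeds max {SECTION_MAX[section]}")
    total = sum(scores.values())
    parts = " | ".join(f"{s} [{scores[s]}/{SECTION_MAX[s]}]" for s in SECTION_MAX)
    return f"QC SCORE: {total} / 100\nSections: {parts}"
```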

Do not deliver a matrix that has not been scored. Do not skip this process.

================================================================================
END OF WORKFLOW
================================================================================