Source: frameworks/kit-builder/05-kit-builder-output-skill.md

05 — OUTPUT SKILL: Kit Builder

Scope

Produces: A complete kit directory under content/frameworks/[kit-name]/.
Consumer: Kathryn (and Claude, who will run the resulting kit).
Output: Numbered files appropriate to the kit type — 6 files standard, more or fewer as justified.


Required Inputs

  1. Golden example — the finished deliverable (Mode 1) or existing protocol (Mode 3)
  2. Kit name — lowercase, hyphenated (becomes directory name and file prefixes)
  3. One-line description — what this kit produces
  4. Audience — who the output is for
  5. Output format — HTML, markdown, or directory structure
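
The kit-name rule above can be checked with a one-line pattern. A minimal sketch, assuming the "lowercase, hyphenated" convention stated in input 2; the regex and function name are illustrative, not an official vault convention:

```python
import re

# Sketch of the kit-name rule: lowercase words separated by single
# hyphens (e.g. "kit-builder"). The pattern is an assumption derived
# from the "lowercase, hyphenated" rule above.
KIT_NAME_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

def valid_kit_name(name: str) -> bool:
    # Accepts "change-communication"; rejects uppercase, underscores,
    # and trailing or doubled hyphens.
    return KIT_NAME_RE.fullmatch(name) is not None

print(valid_kit_name("change-communication"))  # True
print(valid_kit_name("Kit_Builder"))           # False
```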

Mode 1: Build Kit from Golden Example

Step 0: Read Existing Kits of Similar Type

Before analyzing anything, read existing vault kits that produce similar deliverables. This is not optional. The kit builder's output quality is directly proportional to how many reference kits were read.

Extract: file count, QC format, golden example format, what patterns are shared, what's unique to each.

Step 1: Analyze the Golden Example

Read the golden example thoroughly. Extract:

  1. Structure — What sections does it have? What order? What's required vs optional?
  2. Inputs — What data was needed to produce this? (Names, dates, source documents, etc.)
  3. Terminology — What terms have specific meaning? What terms are forbidden?
  4. Quality patterns — What makes this example good? What could go wrong?
  5. Content rules — What tone? What's included vs excluded? Any filtering?
  6. Visual/format patterns — (For HTML) CSS classes, component types, responsive behavior
  7. Audience boundary — Does the output transform internal data for an external audience? If yes, content filtering (in vs. out) is needed.
  8. External QC needs — Does the output contain prose, client-facing language, or brand voice? If yes, identify which external QC files apply (copy-qc.md, sentence-editor.md, brand QC).

Present this analysis to Kathryn before proceeding. Say: "Here's what I extracted. Does this match your intent, or am I missing something?"

Step 2: Determine Kit Type and File Count

Answer the questions in the Kit Complexity Decision Guide (01-context.md). Decide:

| Decision | Default | Override When |
| --- | --- | --- |
| File count | 6 (standard) | Complexity demands more, or simplicity allows fewer |
| QC format | 100-point, 90 threshold | Pass/fail better for disqualifying errors, checklist for simple outputs, interactive HTML for frequent use |
| Golden example format | Match the output format | Structural reference when live examples exist elsewhere; placeholder when awaiting first deployment |
| Separate instructions file | No | Workflow has multiple multi-step processes |
| Consultant methodology | No | Kit involves facilitating a human session |
| Input manifest | No | Inputs are complex enough to warrant separation from context |
| Cross-document QC | No | Output feeds downstream documents |
| Split golden examples | No | Distinct production tracks exist |

Present these decisions to Kathryn before building files.

Step 3: Determine Operating Modes

Every kit has at minimum:

  1. Mode 1 (Create/Build/Generate)
  2. Mode 2 (Improve This Kit, the self-improvement loop)

Additional modes to consider:

| Mode Pattern | When to Add | Examples |
| --- | --- | --- |
| Update mode | Output is a living document that changes over time | Blueprint (update after session), Master Plan (update after every session) |
| Two-beat model | Kit runs at two distinct points in a larger workflow | Change Communication (Beat 1 at scaffolding, Beat 2 at implementation) |
| Multi-mode based on prior state | Different inputs and processing depending on what exists already | CPM (Mode 1 no history, Mode 2 has master plan, Mode 3 has active project plan) |
| Maturity progression | Kit runs differently as the engagement matures | New Client Kit (Day 0, Session 1, Maturity Upgrade) |
| Track-based | Different production paths for human vs. AI execution | Recruiting kits (consultant process, agent process) |

Ask Kathryn: "Does this kit need an Update mode, or is each run independent?"

Step 4: Build File 00 — Start Here

Use this structure:

# 00 — START HERE: [Kit Name]

*This is the setup and orientation document for the [Kit Name]. Read this to understand what it is, what it does, what files it needs, and how to use it.*

---

## What This Is
[2-3 sentences: what it produces, for whom, in what format]
**Audience:** [who reads/uses the output]
**Format:** [HTML / Markdown / Directory structure]
**Lifecycle:** [One-shot / Living document / Series]

---

## Operating Modes
[Table: Mode | Trigger | What It Produces]

### Mode 1 — [Create/Build/Generate]
[When to run, what inputs, what it produces]

### Mode 2 — Improve This Kit
The self-improvement loop. After running this kit:
1. Did I change anything in the output by hand? → Update golden example + output skill
2. Did QC miss something I caught? → Update quality checklist
3. Should the kit do something it doesn't? → Update output skill

[Additional modes if applicable]

---

## What This Does NOT Do
[3-5 explicit scope boundaries]

---

## File Inventory
[Table: # | File | What It Is]

---

## Relationship to Other Kits
[How this kit connects to others: derives from, references, coordinates with]

---

## File Location
[Directory path]

Step 5: Build File 01 — Context

Use this structure:

# 01 — CONTEXT: [Kit Name]

*Input definitions, validation rules, and what each mode requires.*

---

## Mode 1 Inputs
[Table: Input | Required | Example | Used For]

### Validation Rules — Mode 1
[Numbered list of rules]

---

## Mode 2 Inputs (Improve)
[Table: Input | Required | Source | Used For]

### Validation Rules — Mode 2
[Numbered list]

---

## Input Priority Hierarchy
[What wins when inputs conflict]

Additional sections to include when applicable:

Step 6: Build File 02 — Terminology

Use this structure:

# 02 — TERMINOLOGY: [Kit Name]

[One-line: what vocabulary this file locks]

---

## Terms Used in This Kit
[Table: Term | Meaning | NOT This]

---

## Visual/Format States (if applicable)
[Table: State | Visual Treatment | What It Means]

---

## Forbidden Terms
[List of terms that must never appear in the output, with why]

Extract terms from the golden example. Any word used in a specific, non-obvious way gets a definition. Any internal jargon that must not appear in the output gets a "forbidden" entry.

Check existing vault kits for shared terminology. If terms overlap (e.g., GPS, constraint types, deploy chain stages), use the same definitions. Don't redefine existing vocabulary.

Step 7: Build File 03 — Golden Example

Copy the golden example file into the kit directory as 03-[kit-name]-golden-example.[ext].

Golden example formats observed in the vault:

| Format | When Used | Examples |
| --- | --- | --- |
| HTML (fully populated) | Output is HTML | Blueprint, CPM, Master Plan, Project Plan, Offer Page, Narratives |
| Markdown (narrative) | Golden example teaches through a case study, not code | Change Communication |
| Structural reference | Live examples exist in other repos | New Client Kit (points to aos-client-rc and aos-client-jb) |
| Placeholder awaiting deployment | Kit is new, no production run has happened yet | All 5 Recruiting kits |
| Split (03a/03b) | Distinct production tracks | Recruiting kits (consultant vs. agent) |
| Golden examples in client repo | Output is client-specific and the kit references examples there | Client Email, Session Recap |

Verify: If fully populated, no {{PLACEHOLDER}} tags, no blank sections. Templates belong in the output skill. If placeholder, explicitly state: "Golden example needed. The first production run that passes QC should become this file via Mode 2."
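
The "no {{PLACEHOLDER}} tags" verification lends itself to a mechanical scan. A minimal sketch; the tag pattern and function name are assumptions based on the {{PLACEHOLDER}} convention used throughout this document:

```python
import re

# Sketch of the verification above: a fully populated golden example
# must contain no {{PLACEHOLDER}} tags. The pattern is an assumption
# matching the {{PLACEHOLDER}} convention used in these kits.
PLACEHOLDER_RE = re.compile(r"\{\{[A-Za-z0-9_ -]+\}\}")

def unresolved_placeholders(text: str) -> list[str]:
    # Returns every tag still present; an empty list means the check passes.
    return PLACEHOLDER_RE.findall(text)

sample = "<h1>Blueprint</h1><p>Owner: {{OWNER_NAME}}</p>"
print(unresolved_placeholders(sample))  # ['{{OWNER_NAME}}']
```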

Step 8: Build File 04 — Quality

Choose the QC format based on the kit's needs (see QC Format Decision Guide in 01-context.md).

For 100-point weighted (most common):

# 04 — QUALITY: [Kit Name]

**Pass threshold:** 90 / 100
**When to run:** [After every create/update before sharing]

---

## [Category 1] (N points)
[Table: # | Check | Points]

[Continue for all categories — points must total 100]

---

## Common Failure Modes
[Table: Failure | What Happens | How to Fix]

For pass/fail with blocking failures:

# 04 — QUALITY: [Kit Name]

**When to run:** [After every create/update before sharing]

---

## [Section] Checks
[Checklist items]

## Blocking Failures
[Errors that are disqualifying regardless of other quality]

---

## Common Failure Modes
[Table: Failure | What Happens | How to Fix]

For checklist with ship criteria:

# 04 — QUALITY: [Kit Name]

## [Check Category]
[Numbered checks]

## Ship Criteria
[What must be true before presenting to Kathryn]

How to derive quality checks from the golden example:

  1. Look at what makes the golden example good → make checks that verify those qualities
  2. Think about what could go wrong → make checks that catch those failures
  3. Look at the content rules in file 01 → make checks that enforce them
  4. Look at the forbidden terms in file 02 → make checks that scan for them
  5. If external QC files apply → add a mandatory QC pass step (e.g., "Run copy-qc.md. Fix all P1 and P2 violations.")
  6. If the kit has a gap protocol → add a binary pre-build gate (Gate 1) before the weighted QC (Gate 2)

Common failure modes section is required in every QC format — even if empty initially. This is where Mode 2 improvements land.
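
The weighted format's two invariants (points total exactly 100, pass at 90) can be sanity-checked mechanically. A minimal sketch; the category names, weights, and function names below are illustrative, not taken from any real kit:

```python
# Sketch of the weighted QC invariants described above.
PASS_THRESHOLD = 90

def validate_weights(categories: dict[str, int]) -> None:
    # The category point values must total exactly 100.
    total = sum(categories.values())
    if total != 100:
        raise ValueError(f"QC categories total {total} points, expected 100")

def passes_qc(scores: dict[str, int], categories: dict[str, int]) -> bool:
    # A kit passes when the summed score meets the 90-point threshold.
    validate_weights(categories)
    return sum(scores.values()) >= PASS_THRESHOLD

categories = {"Structure": 30, "Content rules": 30, "Terminology": 20, "Format": 20}
scores = {"Structure": 28, "Content rules": 27, "Terminology": 18, "Format": 19}
print(passes_qc(scores, categories))  # 92 / 100 -> True
```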

Step 9: Build File 05 — Output Skill

Use this structure:

# 05 — OUTPUT SKILL: [Kit Name]

## Scope
**Produces:** [what]
**Audience:** [who]
**Filename:** [naming pattern]
**Lifecycle:** [one-shot / living / series]

---

## Required Inputs
[Numbered list — restated from file 01 for standalone readability]

---

## Content Rules
[Numbered list of specific, enforceable rules]

---

## [Component Templates / Section Specifications]
[For HTML: named component snippets with CSS classes]
[For markdown: section-by-section format requirements]

---

## Full Template
[The complete structural skeleton with {{PLACEHOLDER}} tags]
[This is where templates belong — NOT in the golden example]

---

## Delivery Checklist
[Pre-ship verification — the final gate before sharing]

The output skill must be standalone-readable. Someone reading only file 05 should be able to produce the deliverable without referencing files 00-02. Restate scope and inputs. Don't just say "see file 01."

If external QC dependencies exist, document them in the output skill: which files to read, when to run them, what to fix before delivering.

Step 10: Build Additional Files (If Kit Type Requires)

| Additional File | Content |
| --- | --- |
| Instructions | Multi-process workflow with detailed steps per process. Use when the output skill would exceed ~200 lines of procedural content. |
| Input manifest | Structured intake definitions separated from context/methodology. Use when inputs are complex and methodology deserves its own file. |
| Consultant methodology | How to facilitate the human session that produces the inputs. Covers: session structure, timing, phases, guardrails, after-session steps. |
| Full-document QC | Cross-document validation checklist covering this kit's output AND downstream documents that depend on it. |
| Process agent | AI-assisted workflow specifications for the agent production track. |

Step 11: Self-QC

Run the Kit Builder's own quality checklist (this kit's file 04) against the kit you just produced. Score it. If below 90, fix the issues before presenting to Kathryn.

Step 12: Present for Review

Show Kathryn:

  1. The file inventory (what was created and why that file count)
  2. The QC score
  3. Any decisions you made that she should validate (especially: kit type, QC format, operating modes, terminology choices)

Testing Disciplines

These apply to every kit the builder produces — bake them into the kit's start-here (Mode 1 instructions) and output skill (delivery checklist).

Confirm Understanding Before Executing

Before any production run, the kit must confirm its understanding of the inputs and plan:

"Here's what I'm going to do. Here's what I'm working from. Does this match your intent, or am I missing something?"

Do not start production until the advisor confirms. A 30-second confirmation that catches a misunderstanding upfront is far cheaper than a 10-minute run in the wrong direction.

Don't Interrupt the Run

When testing a new or updated kit, let the kit produce its full output before making corrections. Do not intervene mid-process.

Why: Mid-process corrections mask real gaps in the kit's instructions. If you fix a problem while the kit is running, the kit doesn't learn — and the same problem will recur next time. Let it finish, compare to the golden example, then fix the kit files (process, context, quality) so the gap doesn't exist on the next run.

How to apply: After every first run of a new kit or a Mode 2 update:

  1. Run the full production without interruption
  2. Compare output to golden example
  3. Document every gap between actual and expected
  4. Fix the kit files — not just the output
  5. Re-run and verify the gaps are closed

Mode 2: Improve Existing Kit

Step 1: Identify What Triggered the Improvement

| Trigger | Files to Update |
| --- | --- |
| Output was manually changed | 03 (golden example) + 05 (output skill) + possibly 04 (quality) |
| QC missed something | 04 (quality) — add the missing check + add to common failure modes |
| Process was wrong or incomplete | 05 (output skill) |
| New term needs locking | 02 (terminology) |
| Scope needs clarifying | 00 (start-here) — update modes or "does NOT do" |
| Inputs changed | 01 (context) |
| Kit type was wrong | Multiple files — add or remove files as needed, update 00 file inventory |
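
The trigger table above can be encoded as a simple lookup so an improvement run starts with the right files open. A sketch: the keys paraphrase the table, the helper name is invented, and the "kit type was wrong" case is deliberately omitted because it touches multiple files:

```python
# Sketch: improvement trigger -> kit files to update, paraphrasing the
# table above. "Output manually changed" may also touch 04 (quality).
TRIGGER_FILES = {
    "output manually changed": ["03", "05"],
    "qc missed something": ["04"],
    "process wrong or incomplete": ["05"],
    "new term needs locking": ["02"],
    "scope needs clarifying": ["00"],
    "inputs changed": ["01"],
}

def files_to_update(trigger: str) -> list[str]:
    # Unknown triggers return an empty list rather than guessing.
    return TRIGGER_FILES.get(trigger, [])

print(files_to_update("qc missed something"))  # ['04']
```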

Step 2: Read Current Kit Files

Read ALL files of the kit being improved. Understand the current state before making changes.

Step 3: Make the Changes

Update the relevant files. For each change:

Step 4: Re-run QC

Score the updated kit against its own quality checklist (file 04). Confirm the improvement didn't break something else.


Mode 3: Convert Protocol to Kit

Step 1: Read the Protocol Document

Read the entire protocol. Extract:

  1. What deliverable does it produce?
  2. What inputs does it need?
  3. What steps does it follow?
  4. What vocabulary does it use?
  5. What quality expectations are stated or implied?
  6. Does it involve facilitating a human session? → Consultant methodology file needed

Step 2: Follow Mode 1 Steps 0-12

Use the protocol as the primary source instead of a golden example. Key differences:

Step 3: Archive or Link the Protocol

Once the kit is built, the original protocol is no longer the source of truth — the kit is. Either archive the protocol document, or leave it in place with a note pointing to the kit as the new source of truth.


Delivery Checklist

Before presenting the kit to Kathryn:

  1. [ ] Existing kits of similar type were read before building (Step 0 completed)
  2. [ ] Kit type decision is justified — file count matches complexity
  3. [ ] All files created with correct naming convention
  4. [ ] Directory exists at content/frameworks/[kit-name]/
  5. [ ] Start-here has all required sections including "does NOT do" and self-improvement loop
  6. [ ] Context has inputs, validation rules, and priority hierarchy
  7. [ ] Terminology has locked terms, forbidden terms, and (if applicable) visual states
  8. [ ] Golden example exists in appropriate format (populated, reference, or justified placeholder)
  9. [ ] Quality gate format matches the kit's needs — not defaulted to 100-point without justification
  10. [ ] Quality gate has "Common Failure Modes" section
  11. [ ] Output skill is standalone-readable (restates scope and inputs)
  12. [ ] Output skill has content rules, templates/specs, and delivery checklist
  13. [ ] External QC dependencies documented if applicable
  14. [ ] Self-QC scored 90+ against Kit Builder quality checklist (file 04 of THIS kit)
  15. [ ] No duplication of existing vault kit logic — references used instead
  16. [ ] Kit name, file names, and directory follow vault conventions
  17. [ ] Kit's start-here includes "confirm understanding before executing" instruction in Mode 1
  18. [ ] Kit's output skill delivery checklist includes "don't interrupt first run" testing note
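
Checklist items 3-4 (correct naming convention, all files present) can be spot-checked mechanically. A sketch assuming the standard six-file kit; the function name and file names are illustrative:

```python
# Sketch for delivery-checklist items 3-4: are the six standard
# numbered files (00-05) present in the kit directory listing?
STANDARD_PREFIXES = ["00", "01", "02", "03", "04", "05"]

def missing_files(filenames: list[str]) -> list[str]:
    # Compare the two-digit prefixes of the files found against the
    # standard inventory; an empty result means the inventory is complete.
    present = {name[:2] for name in filenames}
    return [p for p in STANDARD_PREFIXES if p not in present]

kit = ["00-start-here.md", "01-context.md", "02-terminology.md",
       "03-golden-example.html", "04-quality.md"]
print(missing_files(kit))  # ['05'] -> the output skill file is missing
```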