# 01 — CONTEXT: Kit Builder
Input definitions, validation rules, and what each mode requires.
## Mode 1 Inputs — Build Kit from Golden Example
| Input | Required | Example | Used For |
|---|---|---|---|
| Golden example file | Yes | A finished HTML blueprint, a completed email, a populated master plan | Becomes file 03. Analyzed to extract structure, terminology, quality patterns, content rules |
| Kit name | Yes | blueprint, session-recap, sop | Directory name, file prefixes (00-blueprint-start-here.md) |
| One-line description | Yes | "Client-facing progress dashboard for a specific initiative" | Start-here opening, orientation for anyone reading the kit |
| Audience | Yes | "The client", "The advisor", "Internal team" | Drives terminology decisions and tone rules |
| Output format | Yes | HTML, Markdown, Directory structure | Determines golden example file extension, output skill template type |
| Operating modes | No | "Create new" and "Update after session" | If not provided, Claude infers from the golden example (minimum: Create mode) |
| Input list | No | "Project plan, reference data, CPM" | If not provided, Claude infers from the golden example content |
| Relationship to other kits | No | "Derived from project plan, feeds into client email" | If not provided, Claude leaves this section for Kathryn to fill |
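The required/optional split in the table above can be checked mechanically before a build starts. A minimal sketch — the field names and fallback strings are assumptions for illustration, not part of the kit builder itself:

```python
# Assumed field names; the required/optional split mirrors the Mode 1 table.
REQUIRED = ["golden_example", "kit_name", "description", "audience", "output_format"]
OPTIONAL_DEFAULTS = {
    "operating_modes": "inferred from golden example (minimum: Create mode)",
    "input_list": "inferred from golden example content",
    "kit_relationships": "left for Kathryn to fill",
}

def validate_mode1_inputs(inputs: dict) -> dict:
    """Reject a build with missing required inputs; fill optional defaults."""
    missing = [field for field in REQUIRED if not inputs.get(field)]
    if missing:
        raise ValueError(f"Cannot build kit, missing required inputs: {missing}")
    resolved = dict(inputs)
    for field, fallback in OPTIONAL_DEFAULTS.items():
        resolved.setdefault(field, fallback)
    return resolved
```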
## Validation Rules — Mode 1
- The golden example must be a finished deliverable. Not a draft, not a sketch, not a description of what you want. If it's not done, finish it first — then come back. Exception: if the kit is new and no deployment has happened yet, the golden example can be a placeholder with structural specifications (see Recruiting kits for this pattern).
- Kit name is lowercase, hyphenated. `session-recap`, not `Session Recap` or `sessionRecap`. This becomes the directory name and all file prefixes.
- Golden examples can be singular or split. One golden example is standard. Split into 03a/03b when the kit has distinct production tracks (e.g., consultant process vs. agent process). Multiple golden examples can inform quality, but each must represent a distinct production path — not just different clients.
- The golden example IS the standard. Everything in the kit — quality checks, output skill, terminology — is derived from this example. If the example has a problem, fix the example first.
- Determine kit type before building. Read existing kits of similar complexity. Does this kit need a separate instructions file? A consultant methodology? An input manifest? A cross-document QC? Decide file count before writing any files. (See Kit Types in 00-start-here.)
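The kit-name rule can be enforced with a simple check. A sketch — the regex and helper names are assumptions, not vault conventions:

```python
import re

# Lowercase, hyphenated: letters/digits separated by single hyphens.
KIT_NAME = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_kit_name(name: str) -> bool:
    return bool(KIT_NAME.match(name))

def file_prefix(kit_name: str, number: int, slug: str) -> str:
    """Derive a numbered file name, e.g. 00-blueprint-start-here.md."""
    return f"{number:02d}-{kit_name}-{slug}.md"
```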
## Mode 2 Inputs — Improve Existing Kit
| Input | Required | Source | Used For |
|---|---|---|---|
| Kit location | Yes | content/frameworks/[kit-name]/ | Which kit to improve |
| Trigger | Yes | One of: QC failure, manual output changes, system suggestion | Determines which files to update |
| Updated output | If applicable | The corrected or improved deliverable | Replaces file 03 (golden example) |
| QC findings | If applicable | Specific checklist items that failed or were missing | Updates file 04 (quality) |
| Process changes | If applicable | Steps that were wrong, missing, or in wrong order | Updates file 05 (output skill) |
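The "if applicable" rows above amount to a mapping from provided artifacts to the kit files they touch. A sketch — the artifact keys are assumptions; the target files come from the table:

```python
# Which kit file each Mode 2 artifact updates, per the table above.
MODE2_UPDATES = {
    "updated_output": "03 (golden example)",
    "qc_findings": "04 (quality)",
    "process_changes": "05 (output skill)",
}

def files_to_update(artifacts: list) -> list:
    """List the kit files needing updates for the artifacts provided."""
    return [MODE2_UPDATES[a] for a in artifacts if a in MODE2_UPDATES]
```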
## Validation Rules — Mode 2
- Always read the current kit files before making changes. Never update blindly — understand what's there first.
- Changes propagate. If you update the golden example, check whether the output skill and quality checklist still match. A golden example change often requires output skill and quality updates too.
- Never remove a quality check without a reason. Quality checks accumulate — they represent lessons learned. If a check seems unnecessary, it probably caught something once.
- Document what changed and why. Add a brief note at the bottom of the updated file: an HTML comment for HTML files, or a `## Change Log` section for markdown files.
## Mode 3 Inputs — Convert Protocol to Kit
| Input | Required | Source | Used For |
|---|---|---|---|
| Protocol document | Yes | khb-aos/skills/, a SOP, a written procedure | Analyzed to extract structure, steps, terminology, quality expectations |
| Kit name | Yes | Same rules as Mode 1 | Directory name, file prefixes |
| One-line description | Yes | Same rules as Mode 1 | Start-here opening |
| Example output | No | A deliverable produced by following the protocol | If available, becomes the golden example. If not, the kit is produced without file 03 (marked as needed) |
## Validation Rules — Mode 3
- A protocol without an example output produces an incomplete kit. File 03 (golden example) will be a placeholder noting that the first production run should become the golden example.
- Protocols are informal — kits are precise. Expect to tighten language, resolve ambiguities, and fill gaps during conversion. Flag anything unclear for Kathryn rather than guessing.
- The protocol's steps become the output skill, not a copy-paste. Restructure into the numbered-file format. The protocol itself can be archived or kept as supplementary reference.
## Input Priority Hierarchy
When inputs conflict or are ambiguous:
- The golden example wins for structure, format, and content patterns
- Kathryn's description wins for audience, purpose, and scope
- Existing vault conventions win for file naming, directory structure, and kit format
- Other kits win for cross-kit terminology and relationship mapping
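The precedence rules above reduce to a lookup from concern to winning source. A sketch — the concern labels are assumptions; the winners come from the list:

```python
# Which input wins each kind of conflict, per the hierarchy above.
CONFLICT_WINNER = {
    "structure": "golden example",
    "format": "golden example",
    "content_patterns": "golden example",
    "audience": "Kathryn's description",
    "purpose": "Kathryn's description",
    "scope": "Kathryn's description",
    "file_naming": "vault conventions",
    "directory_structure": "vault conventions",
    "kit_format": "vault conventions",
    "cross_kit_terminology": "other kits",
    "relationship_mapping": "other kits",
}

def resolve_conflict(concern: str) -> str:
    return CONFLICT_WINNER[concern]
```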
## Kit Complexity Decision Guide
Before building, answer these questions to determine the right kit type:
| Question | If Yes → Add |
|---|---|
| Does the workflow have multiple multi-step processes that would bloat the output skill? | Separate instructions file |
| Are the inputs complex enough that methodology and input definitions should live apart? | Separate input manifest (e.g., Master Plan's 07-input-manifest.md) |
| Does the kit involve facilitating a human session, not just producing a deliverable? | Consultant methodology file (e.g., Change Communication, Recruiting kits) |
| Does the kit's output feed downstream documents that need cross-validation? | Full-document QC file (e.g., Master Plan's cross-doc QC) |
| Does the kit have distinct production tracks (human vs. AI, or different deliverable types)? | Split golden examples (03a/03b) |
| Does the kit need to reference external QC files (copy-qc.md, sentence editor, etc.)? | Document these dependencies in the output skill and quality checklist |
| Is the deliverable simple and the production rules fit in 2-4 files? | Stay lightweight — don't force 6 files |
The default is 6 files. Add or subtract only with justification.
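The decision guide can be read as a function from yes/no answers to additions beyond the default file set. A sketch — the answer keys are assumptions; each True answer maps to the "If Yes → Add" column:

```python
# Map decision-guide answers to kit-file additions beyond the default 6.
def extra_kit_files(answers: dict) -> list:
    additions = []
    if answers.get("multiple_multistep_processes"):
        additions.append("separate instructions file")
    if answers.get("complex_inputs"):
        additions.append("separate input manifest")
    if answers.get("facilitates_human_session"):
        additions.append("consultant methodology file")
    if answers.get("feeds_downstream_docs"):
        additions.append("full-document QC file")
    if answers.get("distinct_production_tracks"):
        additions.append("split golden examples (03a/03b)")
    return additions
```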
## QC Format Decision Guide
Not all kits use the same QC format. Match the format to the kit's needs:
| Format | When to Use | Examples |
|---|---|---|
| 100-point weighted, 90+ threshold | Kits with many quality dimensions that need relative weighting | Blueprint, Project Plan, CPM, Recruiting kits |
| Pass/fail with blocking failures | Kits where certain errors are disqualifying regardless of overall quality | Change Communication |
| Checklist with ship criteria | Kits where the output is simpler and binary quality checks suffice | Client Email, Session Recap |
| Interactive HTML | Kits where QC is run frequently and benefits from a UI | Offer Page |
The kit builder does not force a QC format. Analyze the golden example and determine which format fits.