Context — Field Guide Inputs and Required Decisions
The Gap Protocol
This is the most important section in this kit. Read it before every build.
A gap is any editorial decision not yet made or any required input not yet provided. Gaps are not problems to solve by guessing — they are signals to stop.
When you identify a gap:
- Record it
- Stop the build
- Present it to the consultant
- Wait for the consultant to make the decision
- Proceed only after every gap is resolved
What you must never do:
- Fill a gap from the golden example (the golden example is a styling reference, not a content source)
- Fill a gap by inferring from the book ("this seems like the most important idea so...")
- Fill a gap from another field guide built for a different book
- Make an editorial decision without the consultant
Why this matters: A field guide with invented editorial choices — ideas the consultant didn't select, exercises they didn't design, a mental model they didn't frame — won't represent their expertise. The reader senses it. It doesn't book calls.
Routing Check — Run This First
Has the consultant completed the book analysis (File 07)?
- Yes — 5 ideas selected, deliverables named, interactives designed, mental model defined, prompts engineered → proceed through the decisions below to confirm everything is locked
- No — book not read, ideas not selected, or editorial decisions not made → stop here. Go to 07-field-guide-consultant-methodology.md and complete the analysis first. A build without editorial decisions is not a build — it's a guess.
Decisions That Must Be Made Before Building
Every field guide requires six editorial decisions before the build skill is opened. These decisions come from the consultant — not from the kit, not from the golden example, not from an agent guessing.
Decision 1: Which 5 Ideas
The book will have 8–15 candidate ideas. Five get selected. The selection criteria:
- Recognizable to the audience. The reader should have heard of or felt this idea even if they haven't named it.
- Produces a deliverable. Each idea must result in something the reader can use — a filter, a blueprint, a map, a model, a scorecard. If the idea produces insight but not an artifact, it's content for a blog post, not a field guide session.
- Builds on the previous. The 5 ideas should form a sequence where each session's output feeds the next. Session 01's output becomes input for Session 02. If the ideas are independent, the field guide is a list — not a system.
- Spans the book's argument. The 5 ideas should cover the book's full arc, not cluster in one section.
- At least 3 are interactive. Three of the five concept slides need exercises the reader does in the guide. Two can be static reference visuals. If fewer than 3 are interactive, the field guide doesn't differentiate from a PDF.
Decision 2: Which Ideas Get Interactives
For each of the 5 ideas, determine whether the concept slide is interactive or static.
Interactive when the idea has an assessment moment — the reader rates, checks, scores, or sorts something and sees a result. The interactive creates a reframe BEFORE the AI session. Examples: service scalability checklist, dependency ladder rating, independence scorecard sliders.
Static when the AI session itself is the interactive for that idea — the reader's input to the prompt IS the work. The concept slide shows a framework or reference visual that prepares them for the prompt. Examples: service architecture process flow, pricing comparison grid.
Decision 3: What Each Session Produces
Name the deliverable for each session. These names appear in the table of contents, in the callout boxes, and in the campaign copy. They must be concrete nouns, not abstract concepts.
Good: Specialization Filter, Service Blueprint, Dependency Map, Pricing Architecture, Independence Scorecard. Bad: Clarity, Understanding, Framework, Assessment, Plan.
Decision 4: The Mental Model
Every field guide has a mental model slide (Slide 3) that frames the book's core argument visually before the exercises begin. Define:
- The two-column comparison. What are the two paths/states the book contrasts? (Built to Sell: Generalist Practice vs. Specialized Practice)
- The traits for each side. 5 traits per column that make the contrast concrete.
- The outcome statement. What each path scales with. (Hours vs. Presence)
- The hard truth. One sentence in a dark callout box that names the uncomfortable insight.
Decision 5: Bridge Destination
The bridge slide (Slide 14) has a single CTA on a dark background — no multiple offer cards, no choices. One button.
Default: "Book a Systems Diagnostic" → Calendly link. If a different next step exists (intensive, membership), substitute that single CTA.
Decision 6: Campaign Keyword
A single word, all caps, that commenters type on LinkedIn to request the field guide. The keyword should signal identity — commenting it says something about the person, not just that they want a freebie.
Good: BUILT (signals "I do the work"), PROFIT, TRACTION. Bad: GUIDE, FREE, SEND.
Required Inputs by Section
Slide 1: Intro
| Input | Required | Source |
|---|---|---|
| Book title | Yes | The book |
| Attribution line | Yes | "Based on the book by [Author Name]" — always this format |
| 5 idea names | Yes | Decision 1 |
| 5 deliverable names | Yes | Decision 3 |
| Thesis paragraph(s) | Yes | Consultant writes — frames the book's argument and what these sessions do about it |
Layout: Full width using container-wide. Title, subtitle, thesis in a book-frame callout, table of contents below. No columns.
Never position the field guide as the author's product. The attribution line credits the book. The field guide header says "Advisory OS | Field Guide" — not the book title or author name.
Slide 2: Diagnostic
| Input | Required | Source |
|---|---|---|
| 3 diagnostic questions | Yes | Derived from the 5 ideas — each question maps to a constraint area |
| Scoring matrix | Yes | Each answer distributes points across the 5 ideas |
| 5 result messages | Yes | One per idea — frames which idea "will hit hardest" and why |
Layout: Full width using container-wide. Questions and options must render on single lines.
The diagnostic is motivation, not routing. The result tells the reader where the guide will hit hardest — it does NOT tell them to skip ahead. The intro text must say "start with Idea 01 and work forward."
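The scoring matrix described above — each answer distributes points across the 5 ideas, and the highest total selects the result message — can be sketched as a small script. This is a minimal illustration under stated assumptions: the idea keys, placeholder questions, point weights, and result strings are all hypothetical, not content from any actual field guide.

```typescript
// Hypothetical sketch of the diagnostic scoring matrix. Each answer
// distributes points across the 5 ideas; the highest total picks the
// result message. All names and weights below are placeholders.

type Scores = Record<string, number>;

const IDEAS = ["idea1", "idea2", "idea3", "idea4", "idea5"];

// Each option maps to a point distribution across the 5 ideas.
const QUESTIONS: { prompt: string; options: Record<string, Scores> }[] = [
  {
    prompt: "Placeholder question 1",
    options: { a: { idea1: 3, idea2: 1 }, b: { idea3: 2, idea4: 2 } },
  },
  {
    prompt: "Placeholder question 2",
    options: { a: { idea2: 3 }, b: { idea5: 3, idea1: 1 } },
  },
  {
    prompt: "Placeholder question 3",
    options: { a: { idea4: 3 }, b: { idea5: 2, idea3: 1 } },
  },
];

// One result message per idea, framing where the guide "will hit hardest".
const RESULTS: Record<string, string> = {
  idea1: "Idea 01 will hit hardest for you.",
  idea2: "Idea 02 will hit hardest for you.",
  idea3: "Idea 03 will hit hardest for you.",
  idea4: "Idea 04 will hit hardest for you.",
  idea5: "Idea 05 will hit hardest for you.",
};

function scoreDiagnostic(answers: string[]): { winner: string; message: string } {
  const totals: Scores = {};
  for (const id of IDEAS) totals[id] = 0;
  answers.forEach((answer, i) => {
    const dist = QUESTIONS[i].options[answer] ?? {};
    for (const [idea, pts] of Object.entries(dist)) totals[idea] += pts;
  });
  // Highest total wins; ties resolve to the earlier idea, which stays
  // consistent with the "start with Idea 01 and work forward" framing.
  const winner = IDEAS.reduce(
    (best, id) => (totals[id] > totals[best] ? id : best),
    IDEAS[0],
  );
  return { winner, message: RESULTS[winner] };
}

console.log(scoreDiagnostic(["a", "b", "b"]).winner); // prints "idea5"
```

Note the result is still motivational only: the script names a winning idea, but the page copy around it must keep directing the reader to start at Idea 01.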
Slide 3: Mental Model
| Input | Required | Source |
|---|---|---|
| Two-column comparison | Yes | Decision 4 |
| 5 traits per column | Yes | Decision 4 |
| Outcome statements | Yes | Decision 4 |
| Hard truth sentence | Yes | Decision 4 |
Layout: Text in container, comparison cards in container-wide. Two-column grid that stacks on mobile. Dark insight callout box below with gold top bar.
Slides 4–13: Idea Pairs
Each idea pair needs:
| Input | Required | Source |
|---|---|---|
| Concept slide eyebrow | Yes | "Idea 0X — [Theme Word]" |
| Concept slide title | Yes | The idea name — evocative, not generic |
| Concept slide body text | Yes | 1 paragraph for interactive slides, 2 max for static |
| Frame header instruction | Yes | Merged into gold header: "Your Exercise — [what to do and what it means]" |
| Interactive OR static visual | Yes | Decision 2 determines which |
| Session slide eyebrow | Yes | "Idea 0X — AI Working Session" |
| Session slide title | Yes | "Build Your [Deliverable]" or "Draft Your [Deliverable]" |
| Session slide body text | Yes | What this session does and what the output looks like |
| Session slide callout | Yes | "Output: [specific deliverable description]" |
| AI workflow prompt | Yes | See Prompt Requirements below |
Layout: Text in container, interactive/prompt in container-wide below. Full width. Session slides get .off-white class for visual rhythm.
Slide 14: Bridge
| Input | Required | Source |
|---|---|---|
| Bridge headline | Yes | Frames the gap between diagnosis and implementation |
| Bridge body text | Yes | Names the 5 deliverables they just produced, names the next constraint |
| CTA button | Yes | Single centered CTA — link to booking page |
| CTA subtext | Yes | What happens when they click — "60 minutes" / etc. |
Layout: Dark background (#1a1a1a) using .section.dark-bridge. Cream text. Gold eyebrow and accent bar. Single centered CTA. No offer cards.
Prompt Requirements
Every AI working session prompt follows the TASK/INTRODUCTION/STEP workflow format. This produces a guided session where the AI facilitates step by step with confirmation checkpoints — not a single-dump output.
Prompt Structure
# TASK
You are a [specific role], and your role is to guide me step-by-step
through [specific goal] using [Author]'s [Book Title] [framework name].
At each stage, ask for context, complete the action, and confirm we're
ready to move forward.
# INTRODUCTION
// Introduce yourself as my [role name]
// List the steps:
// 1) [Step 1 name]
// 2) [Step 2 name]
// 3) [Step 3 name]
// 4) [Step 4 name]
// 5) [Step 5 name]
// Ask if I'd like a brief overview of [concept], or if I want to jump in
# STEP 1: [STEP NAME IN CAPS]
// Introduction: [What we'll do in this step]
// Context: [What the AI asks the reader to provide]
// Action + Confirmation: [What the AI produces + confirmation question that pushes for honesty]
[Repeat for Steps 2–5]
[Instruction line]
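As a sanity check on the structure above, the template can be assembled programmatically from a session config. This is a hedged sketch: the config shape (`SessionConfig`, `PromptStep`) and every field name are assumptions made for illustration, not part of the kit.

```typescript
// Hypothetical sketch: assembling the TASK/INTRODUCTION/STEP prompt
// from a session config. Interface names and fields are placeholders.

interface PromptStep {
  name: string;     // step name, rendered in caps in the STEP heading
  intro: string;    // what we'll do in this step
  context?: string; // what the AI asks the reader to provide (optional)
  action: string;   // what the AI produces + honesty-pushing confirmation
}

interface SessionConfig {
  role: string;        // e.g. a specific facilitator role
  goal: string;
  author: string;
  bookTitle: string;
  framework: string;
  steps: PromptStep[]; // exactly 5 per the design rules
  instruction: string; // paste instruction (Prompt 1) or graceful handoff (2–5)
}

function buildPrompt(cfg: SessionConfig): string {
  const lines: string[] = [
    "# TASK",
    `You are a ${cfg.role}, and your role is to guide me step-by-step`,
    `through ${cfg.goal} using ${cfg.author}'s ${cfg.bookTitle} ${cfg.framework}.`,
    "At each stage, ask for context, complete the action, and confirm we're",
    "ready to move forward.",
    "",
    "# INTRODUCTION",
    `// Introduce yourself as my ${cfg.role}`,
    "// List the steps:",
    ...cfg.steps.map((s, i) => `// ${i + 1}) ${s.name}`),
    `// Ask if I'd like a brief overview of ${cfg.framework}, or if I want to jump in`,
  ];
  cfg.steps.forEach((s, i) => {
    lines.push("", `# STEP ${i + 1}: ${s.name.toUpperCase()}`);
    lines.push(`// Introduction: ${s.intro}`);
    if (s.context) lines.push(`// Context: ${s.context}`);
    lines.push(`// Action + Confirmation: ${s.action}`);
  });
  lines.push("", cfg.instruction);
  return lines.join("\n");
}
```

A builder like this is only a convenience; the editorial content of every field — role, goal, step names, confirmation questions — still comes from the consultant's decisions, never from a default.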
Prompt Design Rules
- Role assignment is mandatory. Every # TASK starts with "You are a [specific role]."
- 5 steps per prompt. Each step has Introduction, Context (where needed), and Action + Confirmation.
- Confirmation questions push for honesty. "Does this feel honest?" / "Am I being generous with myself?" Not "Does this look good?"
- Prompt 1 needs no handoff. Instruction: "Paste this entire prompt into Claude or your preferred AI. The AI will guide you through each step."
- Prompts 2–5 use graceful handoffs. "If I completed Session 01 (Specialization Filter), ask me to share the result. If not, ask me to describe my core service." Works both ways.
- The final prompt constrains output. "One thing per dimension, not a list."
- The final prompt prioritizes by leverage, not severity. "Which dimension, if moved up 2 points, would make every other dimension easier to improve?"
- Pricing prompts present a range, not a number. The reader sets the final price.
Prompt Quality Gates
- Paste test: Copy the prompt, paste into Claude. Does the AI run the full guided session and produce the named deliverable?
- Cold reader test: Could someone unfamiliar with the field guide understand what the AI is asking at each step?
- Handoff test: For prompts 2–5, does the graceful handoff work both ways?
- Confirmation test: Does every step end with a question that pushes for honesty?
- Constraint test (final prompt): Does output stay constrained to one recommendation per dimension?