Source: frameworks/kit-field-guide-production/07-field-guide-consultant-methodology.md

Consultant Methodology — Book Analysis and Field Guide Design

Where This Fits in the Production Path

The field guide build cannot start without editorial decisions. Those decisions come from a structured analysis of the book, not from skimming it and picking ideas that sound good. This file documents how to do that analysis.

This is the equivalent of the extraction interview for SOPs. The book is the source material. The consultant is the extractor. The editorial decisions are the extraction output. Without this step, the builder has no inputs — and gaps filled by guessing produce a field guide nobody trusts.


Before the Analysis

Confirm the Book

The book must meet three criteria before analysis begins:

  1. The audience recognizes it. Practice owners in the $500K–$2M range should have heard of this book, seen it recommended, or felt the problem it addresses. If you have to explain what the book is about before the field guide makes sense, it's the wrong book.
  2. It argues for change. The book must have a thesis that creates tension — "you're doing X, you should be doing Y." A descriptive book ("here's how businesses work") doesn't produce exercises or deliverables. A prescriptive book ("stop doing this, start doing that") does.
  3. It produces artifacts. When you apply the book's ideas, the reader should walk away with something concrete — a filter, a scorecard, a map, a blueprint, a model. If the book's ideas only produce insight ("now I understand why..."), it's content for a blog post, not a field guide.

If the book fails any of these three, stop. Pick a different book.

Read the Full Book

Not a summary. Not the chapter headings. Not a podcast about the book. Read it. As you read, keep a running list of candidate ideas: every concept that argues for a change and could produce a concrete artifact. Expect 8–15 candidates by the time you finish.


The Analysis: Selecting 5 Ideas

From Candidates to Finalists

You'll have 8–15 candidate ideas. Five get selected. Run each candidate through these filters:

Filter 1 — Does it produce a deliverable? State the deliverable in concrete noun form: "[Idea] produces a [specific artifact]."

If you can't name the deliverable as a concrete noun, the idea doesn't belong in the field guide. It may work as body copy on a concept slide for an idea that does produce a deliverable.

Filter 2 — Does it build on the previous? The 5 ideas must form a chain. Each session's output becomes input for the next session. Map the chain: for each idea, name the deliverable it consumes and the deliverable it produces.

If two ideas are independent (neither needs the other's output), one of them should be cut or repositioned. A field guide where the sessions are interchangeable is a list, not a system.
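
One way to run Filter 2 is to record what each candidate consumes and produces, then check that every session's input is satisfied by an earlier session's output. A minimal sketch, assuming the editorial data is kept as structured records; the shape and names here are illustrative, not part of the methodology:

```typescript
// Illustrative data model for Filter 2; field names are assumptions.
interface CandidateIdea {
  name: string;
  produces: string;        // the deliverable, as a concrete noun (Filter 1)
  consumes: string | null; // an earlier session's deliverable, or null for Session 01
}

// Flags ideas that break the chain: either independent (no input needed)
// or consuming a deliverable that no earlier session produces.
function looseIdeas(chain: CandidateIdea[]): string[] {
  const problems: string[] = [];
  chain.forEach((idea, i) => {
    if (i === 0) return; // Session 01 is the entry point
    if (idea.consumes === null) {
      problems.push(`${idea.name}: independent, cut or reposition`);
    } else if (!chain.slice(0, i).some((p) => p.produces === idea.consumes)) {
      problems.push(`${idea.name}: consumes "${idea.consumes}", which nothing earlier produces`);
    }
  });
  return problems;
}
```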

Filter 3 — Can the reader assess themselves? At least 3 of the 5 ideas must have an interactive exercise. The exercise creates a reframe before the AI session — by the time the reader opens the prompt, they've already changed their mind about what they have.

For each candidate, ask: "Could the reader check, rate, score, or sort something related to this idea and see a result?" If yes, it's an interactive candidate. If the only way to work with the idea is through the AI session, it's a static candidate.

Filter 4 — Does it span the book's arc? Plot your 5 selections against the book's structure. If all 5 come from chapters 2–5 of a 12-chapter book, you've missed the arc. The reader should feel like they've worked through the book's full argument — diagnosis to action.

Filter 5 — Is the sequence right? The 5 ideas should escalate in discomfort and specificity: early sessions diagnose, middle sessions force choices, and the final session commits the reader to action.

The final idea should produce the deliverable that most clearly reveals the gap between where the reader is and where they need to be. That gap is what the bridge slide converts.

Output of the Selection

After running the filters, produce a one-line summary for each of the 5 selections:

Idea 01: [Idea Name] — argues [X], produces [Deliverable Name] — INTERACTIVE (checklist/rating/slider)
Idea 02: [Idea Name] — argues [X], produces [Deliverable Name] — STATIC (process flow/comparison)
Idea 03: [Idea Name] — argues [X], produces [Deliverable Name] — INTERACTIVE (checklist/rating/slider)
Idea 04: [Idea Name] — argues [X], produces [Deliverable Name] — STATIC (process flow/comparison)
Idea 05: [Idea Name] — argues [X], produces [Deliverable Name] — INTERACTIVE (checklist/rating/slider)

This output feeds directly into File 01's Decisions 1, 2, and 3.
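
If the selection output is captured as structured data rather than prose, the handoff to File 01 stays mechanical. A hedged sketch of the record shape; the field names are assumptions, not File 01's actual schema:

```typescript
// One record per selected idea, mirroring the one-line summary format above.
type ExerciseKind = "checklist" | "rating" | "slider" | "process flow" | "comparison";

interface Selection {
  index: 1 | 2 | 3 | 4 | 5;
  ideaName: string;
  argues: string;      // the idea's claim, one clause
  deliverable: string; // a concrete noun, per Filter 1
  mode: "INTERACTIVE" | "STATIC";
  exercise: ExerciseKind;
}
```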


Designing the Mental Model

The mental model (Slide 3) frames the book's core argument visually. Every book worth building a field guide for has a core contrast — two states, two paths, two approaches. The mental model makes that contrast concrete.

Finding the Contrast

Ask: "What two states does this book compare?"

The contrast should be immediately recognizable. The reader should see themselves on one side and understand — before doing any exercise — which side they need to move toward.

Building the Two Columns

For each side of the contrast, define a short label for the state and a matched set of traits that describe it in daily practice.

The traits should mirror each other — trait 1 on the left has a direct counterpart in trait 1 on the right. This makes the contrast scannable.
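
Treating the two columns as paired data keeps the mirroring honest: if a trait has no counterpart, it has no slot. A minimal sketch; the example content is invented for illustration, not taken from any particular book:

```typescript
// Two-column mental model with mirrored traits and the hard-truth callout.
interface MentalModel {
  leftLabel: string;  // the state the reader is likely in
  rightLabel: string; // the state the book argues for
  traitPairs: [left: string, right: string][]; // index i opposes index i
  hardTruth: string;  // one sentence, dark callout box
}

const example: MentalModel = {
  leftLabel: "Reactive practice",
  rightLabel: "Designed practice",
  traitPairs: [
    ["Takes whatever work arrives", "Turns down off-filter work"],
    ["Priced by the hour", "Priced by the outcome"],
  ],
  hardTruth: "Every hour you sell keeps the practice dependent on you selling hours.",
};
```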

The Hard Truth

One sentence in a dark callout box. This is the uncomfortable insight the reader needs before they start the exercises. It names the specific cost of staying on the wrong side of the contrast.

Test: If the reader nods and moves on without discomfort, the hard truth isn't hard enough.


Designing the Interactives

Checklist Exercise

Use when: The idea asks the reader to evaluate items against criteria.

Design process:

  1. List 5–7 items the reader would check (services, tasks, activities, deliverables)
  2. Define 2–3 pass/fail criteria that appear as tags when checked
  3. Write the summary text for three scenarios: all pass, all fail, mixed
  4. Write the frame header instruction: "YOUR EXERCISE — [what to do] + [what the result means]"

Critical constraint: Every item label must fit on one line at 900px container width. If it wraps, shorten it. Test with the longest label first.
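
A minimal sketch of the checklist data and its summary logic, assuming the exercise is built as a small TypeScript component; the item shapes, tags, and copy are placeholders:

```typescript
// Checklist exercise: 5–7 items, each showing 2–3 pass/fail tags when checked.
interface Criterion { tag: string; met: boolean }                // rendered as a tag on check
interface ChecklistItem { label: string; criteria: Criterion[] } // label: one line at 900px

// Summary copy for the three required scenarios: all pass, all fail, mixed.
function summaryText(checkedItems: ChecklistItem[]): string {
  if (checkedItems.length === 0) return "Check the items that apply to see your result.";
  const passes = checkedItems.map((item) => item.criteria.every((c) => c.met));
  if (passes.every(Boolean)) return "Everything you checked passes. Your problem is focus, not quality.";
  if (passes.every((p) => !p)) return "Nothing you checked passes. Start by cutting, not fixing.";
  return "Mixed result. Keep what passes, and be honest about why you keep the rest.";
}
```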

Rating Exercise

Use when: The idea asks the reader to rate activities on a scale (how much you're involved, how dependent this is on you, etc.).

Design process:

  1. List 5–7 activities to rate
  2. Define the rating scale (1–4 is standard) with named levels
  3. Assign stoplight colors: red → orange → yellow → green
  4. Write vertical legend with level name (bold) and one-sentence description
  5. Write verdict text for three distribution patterns: mostly low, mixed, mostly high
  6. Write the frame header instruction

Critical constraint: Activity text must fit alongside the rating buttons on one line. If it wraps, the layout breaks on mobile. Shorten to ~40 characters.
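
A sketch of the rating model under the same assumptions; the level names and verdict copy are placeholders, not prescribed wording:

```typescript
// Rating exercise: 1–4 scale with named levels and stoplight colors.
const LEVELS = [
  { value: 1, name: "Owner-only", color: "red" },
  { value: 2, name: "Owner-led", color: "orange" },
  { value: 3, name: "Team-led", color: "yellow" },
  { value: 4, name: "Owner-free", color: "green" },
] as const;

// Verdict copy for the three distribution patterns: mostly low, mixed, mostly high.
function verdictText(ratings: number[]): string {
  const avg = ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
  if (avg < 2) return "Mostly low: the practice stops when you stop.";
  if (avg < 3) return "Mixed: you have delegated tasks, not ownership.";
  return "Mostly high: protect this, and price it.";
}

console.log(LEVELS[0].name, verdictText([1, 2, 2, 3, 1]));
```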

Slider Exercise

Use when: The idea asks the reader to score dimensions on a spectrum.

Design process:

  1. List the dimensions (typically 4–6)
  2. Write anchor labels for each dimension — what 1 looks like and what 10 looks like in daily practice. Not abstract ("bad" / "good") — specific ("I offer whatever clients ask for" / "One defined service, repeated")
  3. Write the weakest-dimension text for three scenarios: score ≤3, score 4–6, score 7+
  4. Write the frame header instruction

Critical constraint: Anchor labels must be specific enough that the reader knows where to put themselves without asking "what does a 5 mean?"
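
A sketch of the slider model; the weakest-dimension copy follows the three score bands above, and the example dimension reuses the anchor labels from step 2:

```typescript
// Slider exercise: each dimension carries concrete 1-and-10 anchor labels.
interface Dimension {
  name: string;
  anchorLow: string;  // what 1 looks like in daily practice
  anchorHigh: string; // what 10 looks like in daily practice
  score: number;      // reader's self-placement, 1–10
}

// Copy keyed to the weakest dimension: score ≤3, 4–6, 7+.
function weakestDimensionText(dims: Dimension[]): string {
  const weakest = dims.reduce((min, d) => (d.score < min.score ? d : min));
  if (weakest.score <= 3) return `${weakest.name} is the constraint. Start here; everything else waits.`;
  if (weakest.score <= 6) return `${weakest.name} is functional but fragile. It breaks at the next stage of growth.`;
  return `${weakest.name} is your weakest dimension, and it is still strong. Your gap is elsewhere.`;
}

const dims: Dimension[] = [
  { name: "Service focus", anchorLow: "I offer whatever clients ask for", anchorHigh: "One defined service, repeated", score: 4 },
];
console.log(weakestDimensionText(dims));
```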


Engineering the Workflow Prompts

The Prompt Design Session

For each of the 5 ideas, you're designing a guided conversation between the reader and an AI facilitator. The prompt is a script for that conversation.

Step 1: Define the deliverable. What specific artifact does this session produce? Name it. "A specialization filter" not "an analysis."

Step 2: Define the facilitator role. What kind of expert would guide this work? "A pricing strategist for professional services" not "a helpful assistant."

Step 3: Map the 5 steps. Each step should advance the deliverable by one concrete piece and end with a confirmation question.

Step 4: Design the confirmation questions. These are the heart of the prompt. Bad confirmations let the reader agree and move on. Good confirmations force the reader to confront what they've just produced.

Bad: "Does this look right?" Good: "Does this feel honest? Are you being generous with yourself on any of these?"

Bad: "Shall we continue?" Good: "Which of these scores surprised you — and in which direction?"

Step 5: Design the handoffs. Every prompt after Session 01 must work two ways: with the previous session's deliverable pasted in as input, and standalone, rebuilding the missing input through a few opening questions.

The prompt cannot break if the reader skips a session.
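
A sketch of how the dual-mode handoff can be assembled; the wording is illustrative, not the production prompt:

```typescript
// Builds the prompt preamble for a session, with or without the prior output.
function handoffPreamble(sessionName: string, priorDeliverable?: string): string {
  if (priorDeliverable) {
    return [
      "The reader completed the previous session. Their deliverable:",
      priorDeliverable,
      `Use it as the input for ${sessionName}. Refer to it by name in every step.`,
    ].join("\n\n");
  }
  // Cold start: reconstruct the missing input instead of failing.
  return (
    `The reader skipped the previous session. Before Step 1 of ${sessionName}, ` +
    "ask two or three questions to rebuild the input you would normally receive, then proceed."
  );
}
```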

Prompt Testing Protocol

Every prompt must be tested by actually pasting it into Claude and running the full session with realistic inputs. "The prompt looks solid" is not a test result.

Testing checklist:

  1. The session produces the named deliverable, not a summary of the conversation
  2. Every step's confirmation question forces a pause instead of polite agreement
  3. The prompt runs cleanly both with and without the previous session's output pasted in


Signals That the Analysis Is Incomplete

Watch for these patterns. Each one is a signal to go back and do more work.

Can't name the deliverable: "This session helps the reader understand..." — understanding is not a deliverable. If you can't name a concrete noun, the idea isn't ready.

Sessions are independent: If Session 03's output doesn't reference Session 02's output, the chain is broken. Either reorder or replace the idea.

All ideas from one section: The field guide feels like it covers one chapter deeply instead of the book's full argument. Step back and check the arc.

Interactive design feels forced: If the exercise is "rate how much you agree with this concept," it's not interactive — it's a quiz about the book. The exercise should reveal something about the reader's practice, not test their comprehension.

Confirmation questions are soft: If every step ends with "Does this look good?" the prompts will produce polite agreement instead of honest self-assessment. Every confirmation should make the reader pause.

The mental model doesn't create discomfort: If the reader can look at the two-column comparison and say "I'm fine where I am," the contrast isn't sharp enough. The reader should see themselves on the wrong side and feel the pull toward the other.


Connection to the Engagement

The field guide is a free product that drives diagnostic call bookings. Every design decision serves that goal.

If a field guide produces five deliverables and the reader feels complete — like they have everything they need — the bridge won't convert. The deliverables should make the reader more informed about their gaps, not less in need of help. The diagnosis is the product. The implementation is the service.