Source: business/marketing/content-pipeline/concept-briefs/skill-concept-brief-proof-engine.md


Concept Brief — Proof Engine

Date: 2026-03-26
Status: Draft — validating direction
Position in series: Session 2 of 3 (Intensive: Find → Prove → Close)
Intensive session: Wednesday — "Turn Every Engagement Into Your Next 3 Clients"
Handraiser ancestor: Content-from-Delivery Engine (Skill #4) — partial overlap (LinkedIn posts), but Proof Engine produces the full proof suite
Teased in Session 1 as: "You found $40K hiding in your roster. Now let's turn your best engagement into proof that sells for you."


The Problem

Practice owners produce excellent outcomes and never capture them. Three costs:

  1. Outcomes evaporate after delivery — they exist in memory, notes, and client conversations, but nobody writes them down in a form anyone else can see
  2. The Trust Tax — 20-30 minutes per prospect call spent on unpaid credibility auditions because they have nothing to point to ("Tell me about a time you helped someone like me" and the answer is a scramble)
  3. Past work is the best sales tool they'll never use — every completed engagement contains a case study, a testimonial, and a social proof asset, but without an extraction system, none of it gets captured

IP Direction (Source Material)

The Proof Engine's deepest IP comes from the Proof Gap campaign — a complete article, briefing script, and two interactive tools that document the extraction methodology. The skill operationalizes what the campaign teaches.

| Concept | What It Captures | Vault IP Source |
| --- | --- | --- |
| The Proof Gap | The space between outcomes produced and a prospect's ability to find them. Three costs: Trust Tax (20-30 min/call proving yourself), Comparison Trap (no proof = decisions default to fees), The Constraint (outcomes exist, extraction system doesn't). | Angle: business-aos/reference/proof/angles/proof-gap.md. Campaign: campaigns/proof-gap/advisory-os-proof-gap.html (article with interactive visualizations). |
| Three Layers of Every Engagement | Surface (what the owner would write — accurate, flat), Real (what emerges with follow-up — numbers, actions, before/after), Prospect (what makes someone pick up the phone — starts where the client started). The extraction framework that turns flat descriptions into compelling proof. | Briefing script: campaigns/proof-gap/proof-gap-briefing-script-v3.md — the deepest methodological document. Defines the three layers, the extraction blind spots, and the composited case study pattern. |
| Four Trigger Questions | How to identify which engagements to document first: Has a client renewed without selling? Referred without being asked? Avoided a mistake? Situation changed meaningfully? | Briefing script: campaigns/proof-gap/proof-gap-briefing-script-v3.md. Also: Story Finder tool: campaigns/proof-gap/advisory-os-proof-gap-story-finder.html — 5-question assessment that scores candidates across Specificity, Impact, Narrative Clarity, Readiness. |
| Proof Quality Scoring | Four dimensions: Specificity (can you quantify it?), Impact (how significant?), Narrative Clarity (explainable outside your industry?), Readiness (can it be told without naming the client?). | Story Finder tool: campaigns/proof-gap/advisory-os-proof-gap-story-finder.html |
| Proof Chain Maturity | Four stages: Not Extracted → Not Documented → Not Deployed → Deployed but Not Converting. Where in the chain is the practice stuck? | Readiness Check tool: campaigns/proof-gap/advisory-os-proof-readiness-check.html |
| One-Result Reframe | Guided extraction: anchor the narrative on one specific result the client experienced. Produces warm check-in email and forward-motion email with booking link. | GPT: ip-library/One-Result Reframe Assistant System Prompt.txt — conversational methodology for one-question-at-a-time extraction, two structured email outputs. |
| Offer Brief 4-Step Ladder | Engagement outcome categorization: Crisis (bandaid) → Problem (foundation) → Growth (rebuild) → Vision (transformation). Determines the narrative framing of the case study. | GPT: ip-library/Offer Brief Generator.txt — 12-element framework with step placement methodology. |
| Case Study Arc | The complete structure of a publishable case study: Pattern → Client → Diagnosis → Fix → Results by Week → What It Unlocked → Constraint Pattern → Economics → Reuse Notes. | Case study: business-aos/reference/proof/case-studies/sync-tax-arc.md — the golden example of a finished arc. |
| Testimonial Structure | Each testimonial captured as: Quote, Symptom, Constraint, System Built, Result, Source, Offer. Six cross-cutting themes: revenue discovery, bottleneck removal, decision clarity, speed, practical over theoretical, cross-vertical transfer. | Reference: business-aos/reference/proof/testimonials.md |

Adjacent Existing IP (Reference, Not Source)

IP Gaps & Upgrades

IP Upgrade: Proof Gap Briefing Script at campaigns/proof-gap/proof-gap-briefing-script-v3.md. Current state: the deepest extraction methodology — Three Layers framework, Four Trigger Questions, "What He Skipped" blind spots, composited case study pattern. But designed as a TEACHING narrative for a video briefing, not as skill instructions. Needs adaptation: the methodology must become detection and generation rules the skill can execute. The Three Layers become output sections (Surface → Real → Prospect). The Four Trigger Questions become input validation (does this engagement qualify?). The "What He Skipped" items become quality checks.

IP Upgrade: One-Result Reframe Assistant GPT at ip-library/One-Result Reframe Assistant System Prompt.txt. Current state: conversational extraction producing 2 upgrade emails. Needs adaptation: the extraction methodology (one question at a time, anchor on one result, reflect patterns) should inform the testimonial request email structure. The warm, non-pushy tone transfers directly. The two-step approach (check-in → specific ask) becomes the testimonial request format.

IP Gap: No documented methodology for producing a case study draft from structured engagement details. The sync-tax-arc.md is a COMPLETED case study — it shows the output but not the production process. The briefing script describes the extraction methodology conceptually but doesn't define the skill's step-by-step generation logic. Content interview required to extract: how does Kathryn go from "here are the engagement details" to "here's the draft"? What's the decision tree? What makes one draft publishable and another generic?

IP Gap: No documented methodology for testimonial request emails specific to practice owners. The One-Result Reframe Assistant produces upgrade emails (selling more), not testimonial requests (asking for endorsement). Content interview required to extract: how does Kathryn ask for testimonials? What framing works? What timing? What makes the ask easy for the client to say yes to?

IP Gap: No documented methodology for turning engagement outcomes into LinkedIn posts. The Content-from-Delivery Engine (handraiser Skill #4) addresses this same job but hasn't been concept-briefed or built. These share methodology. Content interview required to extract: how does Kathryn translate an engagement outcome into a pattern-revealing LinkedIn post? What's the structure? How does she anonymize while keeping specificity?

Design Decision (2026-03-27): Session 2 → 3 Handoff — both options available. The Proof Engine supports two handoff modes:


Design Constraint Check

| Constraint | How This Skill Meets It |
| --- | --- |
| Can't fail | Input is details about ONE completed client engagement — something every practice owner has done. They don't need to remember everything; partial details produce partial proof. The skill fills gaps with questions rather than guessing. The Four Trigger Questions help them pick the RIGHT engagement (one with quantifiable outcomes). Kathryn is in the room walking them through it. |
| Sustainable | Run every time an engagement wraps. "Anytime you finish a piece of work, run the Proof Engine." Proof compounds — each run adds to the proof library. The Reuse Map tells them exactly where to deploy each asset. |
| Win fast | Session 2 closes with: "Send the testimonial request tonight. Publish the LinkedIn post tonight." Two outputs that are immediately actionable. The case study draft is publishable within a day with light editing. The quick win isn't documenting — it's deploying. |
| Non-technical | Input is describing a past engagement — something every practice owner can do from memory. The skill produces outputs, not code. Kathryn walks participants through the input template live. No Claude configuration beyond having the skill installed (done during The Groundwork). |
| 10-100x value | A professional case study costs $500-$3,000. A testimonial extraction service costs $200+. The Proof Engine produces both plus a LinkedIn post, a Proof Quality Score, and a Reuse Map — from one engagement, in under an hour. Multiply across engagements over time: each run adds to the proof library. |

Quality Bar

$97 for 3 sessions should feel like $1,000+. They should feel this session alone was worth the price of the entire Intensive — and they built it themselves in under an hour.

"They should leave Session 2 thinking: I've had 50 great engagements and never captured a single one. This changes that."


Input Design

Primary input: Engagement details — structured notes about one completed client engagement. Includes: client profile (anonymized), engagement scope, what was delivered, specific outcomes (quantified where possible), timeline, what changed for the client, any feedback received.

The skill provides an input template (from the Four Trigger Questions and Story Finder methodology) that guides what to include. Participants fill this in during Session 2 based on the engagement they brought (Kathryn tells them at the end of Session 1: "Bring one client engagement you're proud of").
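The template fields named above could be captured as a simple structure the skill validates before generating anything. A minimal sketch in Python — the field names are illustrative assumptions drawn from the list above, not the skill's actual schema (which hasn't been built yet):

```python
# Hypothetical engagement-input template. Field names are assumptions
# for illustration, not the skill's real template.
ENGAGEMENT_TEMPLATE = {
    "client_profile": "",   # anonymized: industry, size, starting situation
    "scope": "",            # what the engagement covered
    "delivered": "",        # what was actually delivered
    "outcomes": [],         # specific outcomes, quantified where possible
    "timeline": "",         # duration and key milestones
    "what_changed": "",     # before/after for the client
    "feedback": "",         # any feedback received (optional)
}

# The Four Trigger Questions, used here as a quick qualifier for
# picking which engagement to document first.
TRIGGER_QUESTIONS = [
    "Has the client renewed without being sold?",
    "Has the client referred without being asked?",
    "Did the client avoid a mistake?",
    "Did the client's situation change meaningfully?",
]

def qualifies(answers: list[bool]) -> bool:
    """An engagement qualifies when at least one trigger question is a yes."""
    return any(answers)
```

Partial answers are fine by design — empty fields become the gaps the skill asks about rather than guesses at.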

Second input path: Client Expansion Finder output (Skill 1). The Revenue Opportunity Summary identifies which clients have the strongest engagement histories. The participant selects one engagement from that list and provides the details.

Zero-friction test:

| Question | Answer |
| --- | --- |
| Does the user already have this data? | Yes — it's their own past work. They lived it. |
| Can they paste it in under 2 minutes? | Yes — the input template guides what to include. Bullet points, not perfect prose. |
| Does it work with messy, incomplete data? | Yes — partial details produce partial proof. Missing numbers = the skill flags it. Missing emotional context = the skill asks. Better than not capturing at all. |
| Is there a second input path? | Yes — Client Expansion Finder output identifies the engagement. Details are then provided manually or from notes. |

Key difference from handraiser version (Content-from-Delivery Engine): The handraiser produces 3 LinkedIn posts from a completed deliverable. The Proof Engine produces the FULL proof suite: case study draft (three-layer narrative), testimonial request email (pre-populated, ready to send), and LinkedIn post (pattern-revealing). Three outputs, not one. And the case study feeds into the SOW Machine the next day.


Foundational Dependency

The Proof Engine works without the Practice Brain foundations. It reads engagement details, not roster data. Any practice owner who has completed a client engagement can run it.

It works better with the Practice Brain. If the skill knows the provider's services catalog, it can frame the case study in terms of service capabilities. If it knows the provider's voice preferences, the LinkedIn post and testimonial request match their communication style. If it knows the provider's ICP (from The Groundwork's Practice Profile), it can write the case study's Prospect layer to attract similar clients.

For the Intensive: Participants have The Groundwork completed (Practice Brain exists). The Proof Engine reads it for voice and framing. But the core input is the engagement details they bring to Session 2.

Upgrade path: Inside Practice Builders, participants build a proof library over time — running the Proof Engine after every major engagement. Each new case study, testimonial, and LinkedIn post compounds. The SOW Machine gets more proof to draw from. The portfolio of proof becomes a competitive moat.


The Skill Output (Sections)

| # | Section | Job |
| --- | --- | --- |
| 1 | Engagement Snapshot | Client (anonymized), scope, timeline, outcome summary — orientation for the reader |
| 2 | Case Study Draft | Three-layer narrative: Surface (flat summary), Real (numbers, actions, before/after), Prospect (starts where the client started — the hook). Two versions: named and anonymized. |
| 3 | Testimonial Request Email | Warm, specific, ready to send. Pre-populated draft quote for the client to confirm or refine. References the specific result, not generic praise. |
| 4 | LinkedIn Post | Pattern-revealing, not self-promotional. Leads with the prospect's situation. Short paragraphs, specific numbers. Anonymized. |
| 5 | Proof Quality Score | Four dimensions scored: Specificity, Impact, Narrative Clarity, Readiness. Flags what's strong and what needs enrichment before publishing. |
| 6 | Reuse Map | Where to deploy each asset: sales conversations (which section to reference), website (where to publish), email sequences (how to excerpt), referral handoff (what to send), SOW Machine (which sections feed into proposals). |
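The Proof Quality Score could work as a simple rubric. A sketch, assuming Python, a 0-3 scale per dimension, and a flagging threshold — only the four dimension names come from the brief; the scale and threshold are assumptions:

```python
# Hedged sketch of the Proof Quality Score. Dimension names are from the
# brief; the 0-3 scale and the "weak" threshold are illustrative assumptions.
DIMENSIONS = ("specificity", "impact", "narrative_clarity", "readiness")

def proof_quality(scores: dict[str, int]) -> dict:
    """Return the total score plus dimensions flagged for enrichment."""
    total = sum(scores[d] for d in DIMENSIONS)
    # A dimension scoring 0 or 1 needs work before the asset is published.
    flagged = [d for d in DIMENSIONS if scores[d] <= 1]
    return {"total": total, "max": 3 * len(DIMENSIONS), "needs_enrichment": flagged}
```

On this sketch, a draft weak on Narrative Clarity or Readiness gets flagged for enrichment rather than silently published — consistent with the "flags what needs enrichment" job in section 5.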

Proof Categories (Signal Types Equivalent)

The skill categorizes the engagement outcome into one primary proof type, which determines narrative framing:

| Category | What It Proves | Detection Pattern | Rooted In | Methodology Depth |
| --- | --- | --- | --- | --- |
| Revenue Discovery | Found or protected money | Quantified revenue impact — gained, saved, or protected | Proof Gap campaign (Trust Tax), testimonials.md theme 1 | Deep — Proof Gap campaign documents extraction methodology, Trust Tax calculation, three-layer narrative |
| Bottleneck Removal | Owner stopped being the constraint | Before: owner does X. After: system/team does X. | testimonials.md theme 2, Phantom Delegation concept brief | Partial — testimonial structure documented, detection patterns not formalized |
| Decision Clarity | Knew where to focus | Before: spread thin. After: clear priority + execution path. | testimonials.md theme 3, Constraint Priority Matrix framework | Deep — CPM framework fully documented with typing, tiering, pattern matching |
| Speed | Results in weeks, not months | Timeline compression — expected duration vs. actual | testimonials.md theme 4 | Partial — testimonial examples exist, no formal extraction methodology |
| Capability Transfer | Team can do what only the owner could | Before: owner-dependent. After: documented, delegable. | testimonials.md theme 5, Educate and Delegate framework | Partial — Educate and Delegate source exists (pre-AI, needs kit conversion) |
| Risk Prevention | Disaster avoided or caught early | Counterfactual — what would have happened without intervention | Case study arc (sync-tax-arc.md), "What It Unlocked" section | Deep — sync-tax-arc case study documents complete arc including counterfactual |
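One way to formalize the detection patterns during the skill build is keyword matching over the outcome description. A rough sketch; the keyword lists below are placeholder assumptions, not the documented patterns:

```python
# Illustrative primary-category detection. Keywords are placeholder
# assumptions; the brief's detection patterns exist only in prose so far.
CATEGORY_PATTERNS = {
    "revenue_discovery": ["revenue", "saved", "recovered", "protected"],
    "bottleneck_removal": ["owner no longer", "delegated", "handed off"],
    "decision_clarity": ["priority", "focus", "roadmap"],
    "speed": ["weeks instead of", "ahead of schedule"],
    "capability_transfer": ["team can now", "documented process"],
    "risk_prevention": ["avoided", "caught early", "would have"],
}

def detect_primary_category(outcome_text: str) -> str:
    """Pick the category whose keywords appear most often in the outcome text."""
    text = outcome_text.lower()
    counts = {
        cat: sum(text.count(kw) for kw in kws)
        for cat, kws in CATEGORY_PATTERNS.items()
    }
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else "uncategorized"
```

In practice the built skill would use model judgment rather than literal keywords; the sketch just makes the categorize-then-frame step concrete: one primary category in, one narrative framing out.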

Quality Checks (from "What He Skipped")

The skill automatically checks each output against three extraction blind spots from the briefing script:

  1. Emotional starting point present? Does the case study start with the PERSON, not the process? (Surface layer trap: leading with methodology instead of the client's experience.)
  2. Compounding outcome highlighted? Is the ongoing/recurring impact in the headline, not buried? (Extraction gap: treating the one-time fix as the whole story when the real value is what it unlocked.)
  3. Specificity sufficient? Are there actual numbers, or just vague "improved" language? (Surface layer trap: "streamlined operations" vs. "$200K in one quarter.")
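The three checks lend themselves to lightweight automation. A minimal sketch, assuming Python; the term lists and the digit test are illustrative — only the three check names come from the briefing-script material:

```python
import re

# Sketch of the three blind-spot checks as automatable heuristics.
def check_emotional_start(opening: str) -> bool:
    """Check 1: does the opening mention the person rather than the process?"""
    person_terms = {"client", "owner", "founder", "she", "he", "they"}
    words = set(re.findall(r"[a-z']+", opening.lower()))
    return bool(person_terms & words)

def check_compounding(headline: str) -> bool:
    """Check 2: is the ongoing/recurring impact surfaced in the headline?"""
    terms = ("recurring", "ongoing", "per year", "every month", "compounds")
    return any(t in headline.lower() for t in terms)

def check_specificity(text: str) -> bool:
    """Check 3: are there actual numbers, or only vague 'improved' language?"""
    return bool(re.search(r"\d", text))
```

A failed check would prompt an enrichment question back to the participant rather than blocking the output — in keeping with "partial details produce partial proof."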

Cohesion Check — Intensive Series Arc

| # | Skill | Session | Job | Throughline |
| --- | --- | --- | --- | --- |
| 1 | Client Expansion Finder | Tue — Find | Find growth hiding in your existing client base | Your practice already has the clients |
| 2 | Proof Engine | Wed — Prove | Turn past engagements into proof that sells for you | Your practice already has the proof |
| 3 | SOW Machine | Thu — Close | Write scoped proposals in minutes, not hours | Your practice already has the deal |

Skill #1 → Skill #2 connection: Client Expansion Finder identifies which clients have the strongest engagement outcomes and the highest expansion potential. Those engagements become the input for Proof Engine. "You found $40K hiding in your roster yesterday. Today, let's turn your best engagement into proof that helps you close the next one."

Skill #2 → Skill #3 connection: Proof Engine produces a case study draft and a testimonial. Tomorrow, the SOW Machine attaches that proof to a real proposal. "You built the proof today. Tomorrow, it goes into a proposal you can send."

Handraiser → Intensive upgrade: Content-from-Delivery Engine (handraiser Skill #4) produces 3 LinkedIn posts from a deliverable. Proof Engine (Intensive) produces the full proof suite — case study, testimonial request, AND LinkedIn post — plus a Reuse Map and Proof Quality Score. The handraiser is one output. The Intensive is three outputs plus the system for deploying them.


Teaching Story

TBD — needs real testing.

Kathryn runs the Proof Engine on a real completed engagement and reports:


Distribution

| Field | Value |
| --- | --- |
| Trigger word | TBD |
| Delivery method | Installed during Session 2 with Kathryn live |
| Practice Brain used for | Voice calibration, service framing, ICP targeting |
| Series position | Session 2 of 3 |
| Input from | Client Expansion Finder (Session 1) identifies which engagement to document |
| Output feeds | SOW Machine (Session 3) — case study + testimonial attached to proposals |
| Draft skill file | None yet — build after brief validation via Skill Build Kit |

Open Questions

  1. Proof output format: Does the skill produce a single document with all three outputs, or three separate files? The SOW Machine needs to READ the case study — does it need a structured file, or does the participant manually select which proof to include?
  2. Anonymization logic: How aggressive? Names swapped? Industry changed? Numbers rounded? The case study needs to be specific enough to be compelling but anonymized enough to be publishable. Where's the line?
  3. Testimonial request timing: When to send — immediately after engagement wraps? After a cool-down period? The One-Result Reframe suggests striking while results are fresh. Is that always right?
  4. LinkedIn post anonymization vs. named: Does the post name the client (with permission) or always anonymize? Different rules for different practice types?
  5. Proof library architecture: When participants run this repeatedly, where do proof assets accumulate? A proof folder in the Practice Brain? How does the SOW Machine find and select the right proof later?
  6. Case study length: The golden example (sync-tax-arc.md) is comprehensive — Pattern through Reuse Notes, ~1,500 words. Is that the target, or should the skill produce a shorter "proof card" format (~500 words) that's faster to produce and easier to deploy?
  7. Relationship to Content-from-Delivery Engine: Same methodology for the LinkedIn post component? Should these be designed together?

Next Steps