Concept Brief — SOW Machine
Date: 2026-03-26
Status: Draft — validating direction
Position in series: Session 3 of 3 (Intensive: Find → Prove → Close)
Intensive session: Thursday — "Close It — In 3 Minutes, Not 3 Hours"
Handraiser ancestor: Scope-to-SOW Converter (Skill #3) — same output type, but SOW Machine adds Practice Brain context + proof integration
The Problem
Practice owners lose deals to delay, not to competitors. Three ways proposals fail them:
- They take too long — 2-4 hours per proposal means they only write them for "sure things," leaving viable opportunities unscoped and unsigned
- Proof never makes it in — even when they have great case studies and testimonials, proposals go out as scope + price with no evidence attached
- Scoping lives in their head — no system means inconsistent proposals, forgotten deliverables, scope creep invitations, and pricing that doesn't reflect the actual work
IP Direction (Source Material)
The SOW Machine draws from five GPT assistants that each solve a piece of the proposal problem. The skill fuses them into a single system that reads from Practice Brain + prospect conversation + Proof Engine output and produces a complete, scoped, priced SOW with proof attached.
| Concept | What It Captures | Vault IP Source |
|---|---|---|
| Offer Creator — Internal Offer Map | Three-stage progressive offer architecture: Diagnostic (entry) → Project (natural upsell) → Continuity (earned, not pitched). Scope boundaries: what's included, what's not, top 3 scope-creep risks, common derailers. COI (Cost of Inaction) calculation with three formulas: hours leak, pipeline slip, margin leak. | GPT: ip-library/Offer Creator GPT.txt — the deepest offer architecture methodology. Diagnostic → Project → Continuity structure, scope creep pressure testing, COI formulas, test snippets for sales conversations. |
| Offer Brief — 12-Element Extraction | Converts prospect conversation into a structured internal document. 12 elements: Voice Match, Exact Words, Step Placement (4-step ladder: Crisis/Problem/Growth/Vision), Honest Outcome, Right-Sized Deliverables, Timeline, Delivery Format, Access Boundaries, Scope Protection, ROI Match, Investment, Confidence Score. | GPT: ip-library/Offer Brief Generator.txt — sequential extraction framework with step placement methodology. The intermediate artifact between "conversation happened" and "SOW is written." |
| Scope Upgrade Script Generator | Two-option response framework for scope changes: Option A (proceed with billable addition) and Option B (strategic deferral). Menu-style phrase options for opener, boundary, investment, and CTA. Service-type taxonomy: done-for-you, advisory, done-with-you, retainer. | GPT: ip-library/Scope Upgrade Script Generator.txt — 2-option framework, 8-step intake, service-type classification. Drives scope definition methodology and pricing model selection. |
| Signature Offer Builder | Complete offer architecture: Offer Snapshot, Positioning Statement, Ideal Client Profile, Problem & Promise, Scope of Work (in/out), Delivery Phases Map (per phase: name, objective, activities, deliverables, completion criteria), Engagement Model, Fit & Filters. | GPT: ip-library/You are the Signature Offer Builder part of The E.txt — 8-question diagnostic producing a comprehensive offer document. The Signature Offer IS the Practice Brain's services catalog entry. |
| One-Page Proposal Generator | Seven-section proposal format: Header, Title, Opening Context, Your Recommendation (phased deliverables), How We'll Work Together, Investment & ROI, Clear Next Step. ~40 lines enforced. One number, no options, one recommendation. 5-gate readiness check. Conflict Resolution Protocol. | GPT: ip-library/# One-Page Proposal Generator.txt — the output template. Assumes sale is confirmed; needs adaptation for proposals that still need to do persuasion work. Note: file marked with # prefix, possibly deprecated. |
These five GPTs solve different pieces of the same puzzle: Offer Creator maps the architecture. Offer Brief extracts the conversation data. Scope Upgrade defines boundaries. Signature Offer documents the service. One-Page Proposal formats the output. The SOW Machine fuses them into a single skill that takes conversation notes + Practice Brain and produces a ready-to-send proposal.
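The Offer Creator's COI (Cost of Inaction) calculation names three formulas: hours leak, pipeline slip, and margin leak. A minimal Python sketch of how they might compute follows; the variable names and arithmetic are illustrative assumptions, since the vault IP defines the real formulas.

```python
# Illustrative sketch of the three COI formulas named in the Offer Creator
# source. The inputs and exact arithmetic here are assumptions, not the
# vault methodology.

def coi_hours_leak(hours_per_week: float, hourly_value: float, weeks: int = 12) -> float:
    """Owner/team hours lost to the unsolved problem, valued at their rate."""
    return hours_per_week * hourly_value * weeks

def coi_pipeline_slip(deals_delayed: int, avg_deal_value: float, slip_rate: float) -> float:
    """Revenue at risk from deals that stall while the problem persists."""
    return deals_delayed * avg_deal_value * slip_rate

def coi_margin_leak(monthly_revenue: float, margin_loss_pts: float, months: int = 3) -> float:
    """Margin points lost each month the current process stays in place."""
    return monthly_revenue * (margin_loss_pts / 100) * months
```

In the SOW's Investment & ROI section, the sum of whichever formulas the conversation data supports becomes the number the price is framed against.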
Micro-Magnet Archive (Searched)
Five micro-magnets are directly relevant to SOW Machine methodology. These are published content assets — the underlying methodology lives in the GPT sources above, but they confirm the IP is audience-tested:
| File | Covers |
|---|---|
| ip-library/micromagnet-archive-2026-3-15/The One-Page Proposal Method.docx | One-page proposal methodology — audience-facing version of the One-Page Proposal Generator GPT |
| ip-library/micromagnet-archive-2026-3-15/Offer Brief Framework for B2B Experts.docx | Offer structure and positioning — audience-facing version of the Offer Brief Generator GPT |
| ip-library/micromagnet-archive-2026-3-15/Streamlined Offer Worksheet.docx | Offer worksheet and scoping tool — simplified scoping methodology |
| ip-library/micromagnet-archive-2026-3-15/How B2B Experts Stop Competing on Price.docx | Value-based pricing strategy — informs Investment & ROI section framing |
| ip-library/micromagnet-archive-2026-3-15/The $30K Conversation You Keep Missing_ The Discovery Bridge Method™.docx | Discovery/scoping conversation methodology for high-ticket deals — informs extraction logic |
Additional related: Why Price Shoppers Target You...Authority Diagnosis Protocol.docx (prospect filtering, adjacent to HQP Triage Prep).
Campaign Folders (Searched)
No dedicated proposal/SOW campaign exists. Two files touch adjacent territory:
| File | Covers |
|---|---|
campaigns/sync-tax/sync-tax-high-ticket-architect.docx | High-ticket service architecture — closest to scoping methodology in campaigns |
campaigns/wrong-clock/offer-clock-finder.html | Offer timing positioning — when to propose, not how |
Assessment: Campaign IP is thin for this skill. The deep methodology lives in the GPT sources, not campaigns. No campaign has addressed proposal writing directly — this is a gap the Intensive fills.
Adjacent Existing IP (Reference, Not Source)
- Offer Page Kit (content/frameworks/offer-page/) — 13-section offer page architecture with Transformation Framework (Symptom → Constraint → System → Outcome), proof section architecture, distinction section (IS/ISN'T), investment formatting. Reference for how the SOW structures its narrative arc.
- Reporter-Partner Gap concept brief (content-pipeline/concept-briefs/concept-brief-08-reporter-partner-gap.md) — Decision Cadence Alignment. Adjacent: SOW timing depends on understanding when the client makes decisions.
- Pain Dissipation Gap concept brief (content-pipeline/concept-briefs/concept-brief-01-pain-dissipation-gap.md) — Delivery-to-Discovery Bridge. Adjacent: proposals that arrive while pain is fresh close faster.
- MVO Discovery Assistant GPT (ip-library/MVO Discovery Assistant - GPT Instructions.txt) — Minimum Viable Offer methodology for identifying the crisis-level problem that closes fast. Adjacent: informs which tier to propose when the prospect is in crisis mode.
- HQP Triage Prep GPT (ip-library/HQP Triage Prep Assistant.txt) — 6-dimension pre-call scoring. Adjacent: qualifying context that informs whether a SOW should even be written.
- business-aos/reference/proof/angles/invisible-work.md — thin reference only, for scope justification framing.
IP Gaps & Upgrades
IP Upgrade: One-Page Proposal Generator at ip-library/# One-Page Proposal Generator.txt. Current state: 7-section proposal format with enforced brevity and 5-gate readiness check. Designed as a CONFIRMATION tool — assumes the sale is already made ("verbal yes on investment"). Needs adaptation: the SOW Machine may need to do persuasion work, not just confirmation. The 5-gate readiness check needs relaxing or replacing. The Opening Context section needs proof integration capability. Content interview required to validate: is the one-page format always right? When do larger engagements need expanded SOW sections (terms, assumptions, dependencies, change order process)?
IP Upgrade: Offer Brief Generator at ip-library/Offer Brief Generator.txt. Current state: 12-element sequential extraction via Q&A. Designed as an interactive conversation (user answers questions one at a time). Needs adaptation: the SOW Machine must PARSE prospect conversation notes and extract the 12 elements automatically — not ask questions. Content interview required to enrich: what conversation patterns map to each of the 12 elements? What does "Voice Match" look like in notes vs. in real-time conversation? How does the skill infer Step Placement from conversation language?
IP Upgrade: Signature Offer Builder at ip-library/You are the Signature Offer Builder part of The E.txt. Current state: produces a complete offer architecture document. Designed as a one-time setup tool — maps the offer once. Needs adaptation: the Practice Brain should CONTAIN Signature Offer documents (one per service). The SOW Machine reads from them when matching prospect needs to services. Content interview required to validate: is the Signature Offer format the right structure for Practice Brain services catalog entries? What fields are essential for the SOW Machine to read?
IP Gap: No documented methodology for converting a prospect conversation into a scoped proposal. The Offer Creator handles offer ARCHITECTURE (internal map). The Offer Brief handles conversation EXTRACTION (structured notes). The One-Page Proposal handles output FORMATTING. But no existing IP covers the ROUTING logic — given this conversation, which service tier matches? Given this prospect's situation, what scope is right? Given their stated problem, which deliverables apply? Content interview required to extract: how does Kathryn go from "I just talked to someone who needs X" to "here's what I'd propose"? What's the decision tree?
IP Gap (partially addressed 2026-03-27): Proof integration into proposals. Handoff mode decided: both structured file (auto) and manual selection available. Remaining gap: (a) matching logic — when proof library has multiple case studies, how to match the right one to the prospect's situation (industry, problem type, outcome desired), (b) placement logic — Opening Context? Investment & ROI? Standalone attachment?, (c) framing logic — "A firm like yours..." adaptation. Content interview required to extract: how does Kathryn decide which case study to reference in a proposal? What makes proof relevant vs. generic?
IP Gap: Multi-service proposals — current IP assumes one offer per proposal. Practice owners often need to propose bundled services (Diagnostic + Project) or phased engagements (Phase 1 now, Phase 2 in Q2). Content interview required: how does Kathryn handle multi-service scoping? When does she bundle vs. phase? How does pricing change for bundles?
Design Constraint Check
| Constraint | How This Skill Meets It |
|---|---|
| Can't fail | Three inputs, all available: (1) Prospect conversation notes — something they already have (email thread, call notes, or a Client Expansion Finder opportunity). (2) Practice Brain — services catalog and pricing from The Groundwork. (3) Proof Engine output — case study from yesterday's session. If they only have conversation notes, the skill still produces a SOW — just without proof integration. Kathryn is in the room for live support. |
| Sustainable | Run every time a prospect conversation happens. "Someone expresses interest → run the SOW Machine." The speed (3 minutes vs. 3 hours) means they write proposals for opportunities they'd previously skip. Scope Protection Notes prevent recurring scope creep. |
| Win fast | By Session 3, the full loop is visible: found it Tuesday, built proof Wednesday, closing it Thursday. The SOW is real — a proposal they can send TODAY to a real prospect, with their real services, at their real pricing, with proof from their real engagement. The win isn't a template — it's a sent proposal. |
| Non-technical | Three inputs, all already produced by the system: conversation notes (from their own prospect interaction), Practice Brain (built during The Groundwork), Proof Engine output (from yesterday's session). Paste and run. Kathryn is live for the first run. No configuration beyond the skill install. |
| 10-100x value | Professional proposal writing runs $500-$2,000 per engagement. The SOW Machine produces a scoped, priced, proof-integrated proposal in minutes — and they can run it every time a prospect conversation happens. One closed deal from a proposal they would have skipped writing pays for the Intensive 30x over. |
Quality Bar
$97 for 3 sessions should feel like $1,000+. They should feel this session alone was worth the price of the entire Intensive — and they built it themselves in under an hour. The SOW output should read like a $500–$2,000 proposal writing service — scoped, priced, proof-integrated, and sendable.
"They should leave Session 3 thinking: I just wrote a proposal with proof attached in 3 minutes. It used to take me 4 hours and I never included proof."
- The SOW reads like a professional proposal — not a filled-in template, but a document that references the prospect's actual words and situation
- Proof is INTEGRATED, not appended — the case study reference in the Opening Context makes the prospect think "that sounds like me"
- Scope Protection Notes prevent the #1 post-close problem (scope creep) before it starts
- The investment section ties price to ROI in the prospect's language — not a rate card
- Paired with Client Expansion Finder + Proof Engine: the full loop closes. Find → Prove → Close is one system, and the SOW is the output.
Input Design
Three inputs (all produced by the system):
- Prospect conversation notes — email thread, call notes, or a Client Expansion Finder opportunity that converted into a conversation. Messy is fine. The skill extracts the 12 Offer Brief elements automatically.
- Practice Brain — services catalog (what they offer, pricing tiers, delivery formats, scope templates), engagement model (how they work), and voice preferences. Produced during The Groundwork and refined through Sessions 1-2.
- Proof Engine output (optional but powerful) — case study draft and testimonial from Session 2. The skill matches proof to the prospect's situation and integrates it into the SOW.
Zero-friction test:
| Question | Answer |
|---|---|
| Does the user already have this data? | Yes — conversation notes are from a real prospect interaction they already had. Practice Brain is on their machine. Proof Engine output is from yesterday. |
| Can they paste it in under 2 minutes? | Yes — paste conversation notes, Practice Brain is already loaded, Proof Engine output is already on their machine. |
| Does it work with messy, incomplete data? | Yes — partial conversation notes produce partial SOWs. Missing pricing = the skill flags it and uses ranges. Missing proof = SOW goes out without proof (still better than the 4-hour version). |
| Is there a second input path? | Yes — instead of conversation notes, they can describe the prospect's situation verbally (Kathryn guides live). Or use a Client Expansion Finder outreach response as the conversation input. |
Key difference from handraiser version (Scope-to-SOW Converter): The handraiser takes plain language and produces a basic SOW. The SOW Machine takes prospect conversation notes + Practice Brain (services, pricing, format, voice) + Proof Engine output (case studies, testimonials) and produces a complete proposal with proof integration, scope protection, and investment framing. Context-rich vs. context-free. The handraiser demonstrates the approach. The Intensive deploys the system.
Foundational Dependency
The SOW Machine works at three power levels:
Minimum (conversation notes only): Produces a basic scoped proposal from conversation details. No service matching, no pricing from catalog, no proof. Functional but generic.
Standard (conversation notes + Practice Brain): Full power — service matching from catalog, pricing from tier structure, scope from templates, voice from preferences. This is the expected Intensive experience.
Maximum (conversation notes + Practice Brain + Proof Engine output): The full loop — everything above PLUS proof integration. The SOW opens with a case study reference, includes a testimonial pull-quote in the investment section, and attaches the full case study. This is the Session 3 demo moment.
For the Intensive: By Session 3, participants have all three inputs. They've completed The Groundwork (Practice Brain), run the Proof Engine (case study + testimonial from Session 2), and brought prospect conversation notes. The full loop is visible.
Upgrade path: Inside Practice Builders, participants build deeper services catalogs, accumulate proof libraries, and refine scope templates over time. Each SOW gets better as the system inputs get richer. Proof selection becomes more targeted as the library grows.
The Skill Output (Sections)
| # | Section | Job |
|---|---|---|
| 1 | Prospect Snapshot | Who they are, what they said, what they need — in their words, from the conversation notes |
| 2 | Service Match | Which services from the Practice Brain catalog match the prospect's stated needs. Table: Need Expressed → Service Matched → Why This Fits |
| 3 | Scope Definition | What's in, what's out, deliverables per phase, completion criteria. Uses Diagnostic → Project → Continuity structure where applicable. |
| 4 | Investment & ROI | Price, format (project/retainer/phased), ROI framing in the prospect's language. COI calculation if data supports it. One number, one recommendation. |
| 5 | Proof Integration | Matched case study excerpt and/or testimonial pull-quote from Proof Engine output. Placed in context: "A firm like yours..." framing. If no proof available, section states what's missing. |
| 6 | SOW Document | The actual proposal — formatted, ready to send. Follows One-Page Proposal structure: Header, Title, Opening Context (with proof), Recommendation (phased deliverables), How We'll Work, Investment & ROI, Next Step. |
| 7 | Scope Protection Notes | Anticipated scope creep (top 3 requests to watch for), boundary language for each, when to re-scope vs. absorb. Internal-facing — not in the proposal itself. |
Extraction Logic (Signal Types Equivalent)
The skill extracts structured data from messy conversation notes. Each extraction maps to an Offer Brief element:
| What's Extracted | From the Conversation | Maps To | Rooted In |
|---|---|---|---|
| Problem Statement | Prospect's exact words describing what's wrong | Offer Brief Elements 1-2 (Voice Match, Exact Words) | Offer Brief Generator (12-element extraction) |
| Step Placement | Crisis (bandaid needed now), Problem (foundation needed), Growth (expansion), Vision (transformation) | Offer Brief Element 3 (4-step ladder) | Offer Brief Generator + MVO Discovery (crisis identification) |
| Outcome Expected | What the prospect said they want to achieve | Offer Brief Element 4 (Honest Outcome) | Offer Brief Generator |
| Service Match | Prospect's needs matched against Practice Brain catalog | Signature Offer Builder (services → scope mapping) | Signature Offer Builder + Offer Creator (tier matching) |
| Scope Boundaries | What's in, what's out, anticipated creep | Offer Brief Elements 8-9 (Access Boundaries, Scope Protection) | Offer Creator GPT (scope creep pressure testing) + Scope Upgrade Script Generator (boundary language) |
| Pricing Signal | Budget hints, value perception, urgency | Offer Brief Elements 10b (Investment) + COI | Offer Creator GPT (COI formulas) + Profit Lead Detector (rate methodology) |
| Proof Match | Prospect's industry, problem type, desired outcome → matched to proof library | Proof Integration section | Gap — methodology not documented. Needs design. |
Cohesion Check — Intensive Series Arc
| # | Skill | Session | Job | Throughline |
|---|---|---|---|---|
| 1 | Client Expansion Finder | Tue — Find | Find growth hiding in your existing client base | Your practice already has the clients |
| 2 | Proof Engine | Wed — Prove | Turn past engagements into proof that sells for you | Your practice already has the proof |
| 3 | SOW Machine | Thu — Close | Write scoped proposals in minutes, not hours | Your practice already has the deal |
Skill #2 → Skill #3 connection: Proof Engine produces a case study draft and a testimonial. SOW Machine reads that output and integrates it into the proposal. "You built the proof yesterday. Today it goes into a proposal you can send." The case study becomes the Opening Context ("A firm like yours faced the same problem — here's what happened"). The testimonial becomes an Investment & ROI anchor.
Full loop visible by Session 3: Found opportunities on Tuesday (Client Expansion Finder) → built proof on Wednesday (Proof Engine) → closing the deal on Thursday (SOW Machine). The proposal references a real client, uses real services, quotes real pricing, and attaches real proof. Nothing generic. Nothing hypothetical.
Handraiser → Intensive upgrade: Scope-to-SOW Converter (handraiser Skill #3) takes plain language and produces a basic SOW. SOW Machine (Intensive) takes prospect conversation notes + Practice Brain + Proof Engine output and produces a complete proposal with proof integration, scope protection, and investment framing. The handraiser is a converter. The Intensive is a system.
The closing arc: "Day in the Life" — quarterly Client Expansion Finder scan, every engagement completion triggers Proof Engine, every prospect conversation runs through SOW Machine. Three skills, continuous loop: Find → Prove → Close → Deliver → Prove → Close...
Teaching Story
TBD — needs real testing.
Kathryn runs the SOW Machine on a real prospect conversation (or a Client Expansion Finder opportunity that she follows up on) and reports:
- What prospect conversation did she use? How messy were the notes?
- Did the skill correctly extract the prospect's problem and step placement?
- Did the service match pull the right services from her catalog?
- Was the scope definition complete? Did it miss anything she'd normally include?
- Was the pricing right? Did the COI calculation land?
- Did the proof integration work? Was the case study reference relevant to the prospect?
- Was the SOW sendable? Would she send it as-is or with edits?
- How long did it take? (The "3 minutes vs. 3 hours" claim needs evidence.)
- The Scope Protection Notes — did they anticipate real scope creep she's seen before?
- The "it was right" moment — what surprised her about the output?
Distribution
| Field | Value |
|---|---|
| Trigger word | TBD |
| Delivery method | Installed during Session 3 with Kathryn live |
| Practice Brain required | Yes — services catalog + pricing + engagement model |
| Proof Engine output used | Yes (optional) — case study + testimonial for integration |
| Series position | Session 3 of 3 (finale) |
| Input from | Proof Engine (Session 2) for proof. Prospect conversation notes for scope. Practice Brain for services/pricing. |
| Output | The deliverable. A real proposal they can send today. |
| Draft skill file | None yet — build after brief validation via Skill Build Kit |
Open Questions
- SOW vs. One-Page Proposal: Is the output always a one-page proposal? Or should the skill detect engagement size and produce a one-page version (for <$5K engagements) and an expanded SOW (for >$5K with terms, assumptions, dependencies)? The One-Page Proposal Generator enforces ~40 lines. Is that always right?
- Proof selection logic — dual handoff decided (2026-03-27). Two modes: (A) Structured file from Proof Engine auto-feeds into SOW Machine — zero-friction, proof flows in without manual selection. (B) Participant manually selects which proof to include, guided by Kathryn. Both modes built; Kathryn chooses per session. Remaining question: when the proof library has MULTIPLE case studies (from repeat Proof Engine runs), how does auto-selection work? By industry? Problem type? Outcome type? Recency?
- Pricing source: Where does pricing live in the Practice Brain? Signature Offer Builder documents it per service. Is that sufficient, or does the skill need a separate pricing table? What about volume discounts, phased pricing, or founding rates?
- Scope templates: Do participants build scope templates during The Groundwork, or does the skill generate scope from scratch each time? Templates would improve consistency. But they need to be built.
- Multi-service proposals: Current IP assumes one offer. How does the skill handle "I need help with X and Y"? Two separate SOWs? One combined? Phased with milestones?
- Relationship to Scope-to-SOW Converter: Same question as Skill #1 — entirely separate build, or does SOW Machine start from the handraiser skill and add Practice Brain + proof integration? Less build work if they share a foundation.
- Post-proposal scope protection: The Scope Protection Notes are internal-facing. Should the skill also produce scope protection language for the SOW itself (e.g., "Change Order Process" section)?
- Demo data: What prospect conversation does Kathryn use for the Session 3 live demo? A real prospect? A Client Expansion Finder opportunity she followed up on? A re-creation?
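For the proof auto-selection question above, one possible shape is a weighted score over the four candidate dimensions (industry, problem type, outcome type, recency). The weights, field names, and decay window here are illustrative assumptions, not a decided design:

```python
# Hypothetical auto-selection sketch for a multi-entry proof library.
# All weights and dict keys are assumptions pending the content interview.

from datetime import date

def score_proof(prospect: dict, case_study: dict, today: date) -> float:
    """Score one case study against the prospect on the four candidate dimensions."""
    score = 0.0
    if case_study.get("industry") == prospect.get("industry"):
        score += 3.0  # industry match weighted highest
    if case_study.get("problem_type") == prospect.get("problem_type"):
        score += 2.0
    if case_study.get("outcome_type") == prospect.get("outcome_desired"):
        score += 2.0
    age_days = (today - case_study["published"]).days
    score += max(0.0, 1.0 - age_days / 365)  # freshness bonus, decays over a year
    return score

def select_proof(prospect: dict, library: list[dict], today: date) -> dict:
    """Pick the highest-scoring case study for this prospect."""
    return max(library, key=lambda cs: score_proof(prospect, cs, today))
```

Whether recency should ever outrank an industry match is exactly the kind of call the content interview needs to settle before this logic is built.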
Next Steps
- [ ] Kathryn validates this brief
- [ ] Content interview: prospect conversation → scope conversion logic (Kathryn's decision tree)
- [ ] Content interview: proof integration strategy (which proof where, matching logic)
- [ ] Content interview: multi-service proposal handling (bundle vs. phase, pricing implications)
- [ ] Design decision: SOW format (one-page always? or adaptive by engagement size?)
- [ ] Design decision: Practice Brain services catalog structure (Signature Offer format? separate pricing table?)
- [ ] Review and confirm One-Page Proposal Generator status (deprecated? usable?)
- [ ] Resolve open questions (especially #1, #2, #6 — these affect build scope)
- [ ] Kathryn tests draft skill on a real prospect conversation + Practice Brain + Proof Engine output
- [ ] Capture teaching story from test results (the "3 minutes vs. 3 hours" claim)
- [ ] Build through Skill Build Kit process after brief is validated