Source: business/marketing/content-pipeline/concept-briefs/skill-concept-brief-sow-machine.md


Concept Brief — SOW Machine

Date: 2026-03-26
Status: Draft — validating direction
Position in series: Session 3 of 3 (Intensive: Find → Prove → Close)
Intensive session: Thursday — "Close It — In 3 Minutes, Not 3 Hours"
Handraiser ancestor: Scope-to-SOW Converter (Skill #3) — same output type, but SOW Machine adds Practice Brain context + proof integration


The Problem

Practice owners lose deals to delay, not to competitors. Three ways proposals fail them:

  1. They take too long — 2-4 hours per proposal means they only write them for "sure things," leaving viable opportunities unscoped and unsigned
  2. Proof never makes it in — even when they have great case studies and testimonials, proposals go out as scope + price with no evidence attached
  3. Scoping lives in their head — no system means inconsistent proposals, forgotten deliverables, scope creep invitations, and pricing that doesn't reflect the actual work

IP Direction (Source Material)

The SOW Machine draws from five GPT assistants that each solve a piece of the proposal problem. The skill fuses them into a single system that reads from Practice Brain + prospect conversation + Proof Engine output and produces a complete, scoped, priced SOW with proof attached.

| Concept | What It Captures | Vault IP Source |
| --- | --- | --- |
| Offer Creator — Internal Offer Map | Three-stage progressive offer architecture: Diagnostic (entry) → Project (natural upsell) → Continuity (earned, not pitched). Scope boundaries: what's included, what's not, top 3 scope-creep risks, common derailers. COI (Cost of Inaction) calculation with three formulas: hours leak, pipeline slip, margin leak. | GPT: ip-library/Offer Creator GPT.txt — the deepest offer architecture methodology. Diagnostic → Project → Continuity structure, scope creep pressure testing, COI formulas, test snippets for sales conversations. |
| Offer Brief — 12-Element Extraction | Converts prospect conversation into a structured internal document. 12 elements: Voice Match, Exact Words, Step Placement (4-step ladder: Crisis/Problem/Growth/Vision), Honest Outcome, Right-Sized Deliverables, Timeline, Delivery Format, Access Boundaries, Scope Protection, ROI Match, Investment, Confidence Score. | GPT: ip-library/Offer Brief Generator.txt — sequential extraction framework with step placement methodology. The intermediate artifact between "conversation happened" and "SOW is written." |
| Scope Upgrade Script Generator | Two-option response framework for scope changes: Option A (proceed with billable addition) and Option B (strategic deferral). Menu-style phrase options for opener, boundary, investment, and CTA. Service-type taxonomy: done-for-you, advisory, done-with-you, retainer. | GPT: ip-library/Scope Upgrade Script Generator.txt — 2-option framework, 8-step intake, service-type classification. Drives scope definition methodology and pricing model selection. |
| Signature Offer Builder | Complete offer architecture: Offer Snapshot, Positioning Statement, Ideal Client Profile, Problem & Promise, Scope of Work (in/out), Delivery Phases Map (per phase: name, objective, activities, deliverables, completion criteria), Engagement Model, Fit & Filters. | GPT: ip-library/You are the Signature Offer Builder part of The E.txt — 8-question diagnostic producing a comprehensive offer document. The Signature Offer IS the Practice Brain's services catalog entry. |
| One-Page Proposal Generator | Seven-section proposal format: Header, Title, Opening Context, Your Recommendation (phased deliverables), How We'll Work Together, Investment & ROI, Clear Next Step. ~40 lines enforced. One number, no options, one recommendation. 5-gate readiness check. Conflict Resolution Protocol. | GPT: ip-library/# One-Page Proposal Generator.txt — the output template. Assumes sale is confirmed; needs adaptation for proposals that still need to do persuasion work. Note: file marked with # prefix, possibly deprecated. |
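The three COI formulas named in the Offer Creator row can be made concrete. The variable definitions below are assumptions for illustration only — this brief names the formulas (hours leak, pipeline slip, margin leak) but does not spell out their inputs:

```python
# Assumed definitions for the three Cost-of-Inaction formulas; the Offer
# Creator GPT source names them, but the inputs here are illustrative.

def hours_leak(hours_per_week, hourly_rate, weeks_per_year=50):
    # Owner time lost to the unsolved problem, priced at their billable rate.
    return hours_per_week * hourly_rate * weeks_per_year

def pipeline_slip(stalled_deals, avg_deal_value, close_rate):
    # Expected revenue stuck because proposals never go out.
    return stalled_deals * avg_deal_value * close_rate

def margin_leak(annual_revenue, margin_points_lost):
    # Margin surrendered to scope creep or underpricing, in percentage points.
    return annual_revenue * (margin_points_lost / 100)

def cost_of_inaction(hours, rate, deals, value, close, revenue, points):
    # Total COI: the number the Investment & ROI section would anchor against.
    return (hours_leak(hours, rate)
            + pipeline_slip(deals, value, close)
            + margin_leak(revenue, points))
```

Whether these inputs match Kathryn's actual formulas is exactly what the content interview should confirm.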

These five GPTs solve different pieces of the same puzzle: Offer Creator maps the architecture. Offer Brief extracts the conversation data. Scope Upgrade defines boundaries. Signature Offer documents the service. One-Page Proposal formats the output. The SOW Machine fuses them into a single skill that takes conversation notes + Practice Brain and produces a ready-to-send proposal.
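The fusion described above can be sketched as a three-stage pipeline. Everything here is a hypothetical illustration — the function names, the first-sentence "extraction," and the keyword matching are toy stand-ins for the real methodology, not the skill itself:

```python
# Illustrative pipeline only: each stage stands in for one GPT's job.

def extract_offer_brief(notes):
    # Offer Brief stage: capture the prospect's exact words.
    # (First sentence stands in for the full 12-element extraction.)
    return {"exact_words": notes.split(".")[0].strip()}

def match_service(brief, catalog):
    # Signature Offer stage: naive keyword match against the services catalog.
    for service in catalog:
        if any(k in brief["exact_words"].lower() for k in service["keywords"]):
            return service
    return catalog[0]  # fall back to the entry-level (Diagnostic) offer

def format_proposal(brief, service, proof=None):
    # One-Page Proposal stage: seven sections collapsed to three for brevity.
    return "\n".join([
        f"Title: {service['name']} Proposal",
        f"Opening Context: {proof or '(no proof attached)'}",
        f"Recommendation: {service['name']}, because you said: {brief['exact_words']}",
    ])

def sow_machine(notes, catalog, proof=None):
    brief = extract_offer_brief(notes)
    return format_proposal(brief, match_service(brief, catalog), proof)
```

Even at toy scale the shape holds: conversation in, brief extracted, service matched from the catalog, proposal formatted with proof if available.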

Micro-Magnet Archive (Searched)

Five micro-magnets directly relevant to SOW Machine methodology. These are published content assets — the underlying methodology lives in the GPT sources above, but these confirm the IP is audience-tested:

| File | Covers |
| --- | --- |
| ip-library/micromagnet-archive-2026-3-15/The One-Page Proposal Method.docx | One-page proposal methodology — audience-facing version of the One-Page Proposal Generator GPT |
| ip-library/micromagnet-archive-2026-3-15/Offer Brief Framework for B2B Experts.docx | Offer structure and positioning — audience-facing version of the Offer Brief Generator GPT |
| ip-library/micromagnet-archive-2026-3-15/Streamlined Offer Worksheet.docx | Offer worksheet and scoping tool — simplified scoping methodology |
| ip-library/micromagnet-archive-2026-3-15/How B2B Experts Stop Competing on Price.docx | Value-based pricing strategy — informs Investment & ROI section framing |
| ip-library/micromagnet-archive-2026-3-15/The $30K Conversation You Keep Missing_ The Discovery Bridge Method™.docx | Discovery/scoping conversation methodology for high-ticket deals — informs extraction logic |

Additional related: Why Price Shoppers Target You...Authority Diagnosis Protocol.docx (prospect filtering, adjacent to HQP Triage Prep).

Campaign Folders (Searched)

No dedicated proposal/SOW campaign exists. Two files touch adjacent territory:

| File | Covers |
| --- | --- |
| campaigns/sync-tax/sync-tax-high-ticket-architect.docx | High-ticket service architecture — closest to scoping methodology in campaigns |
| campaigns/wrong-clock/offer-clock-finder.html | Offer timing positioning — when to propose, not how |

Assessment: Campaign IP is thin for this skill. The deep methodology lives in the GPT sources, not campaigns. No campaign has addressed proposal writing directly — this is a gap the Intensive fills.

Adjacent Existing IP (Reference, Not Source)

IP Gaps & Upgrades

IP Upgrade: One-Page Proposal Generator at ip-library/# One-Page Proposal Generator.txt. Current state: 7-section proposal format with enforced brevity and 5-gate readiness check. Designed as a CONFIRMATION tool — assumes the sale is already made ("verbal yes on investment"). Needs adaptation: the SOW Machine may need to do persuasion work, not just confirmation. The 5-gate readiness check needs relaxing or replacing. The Opening Context section needs proof integration capability. Content interview required to validate: is the one-page format always right? When do larger engagements need expanded SOW sections (terms, assumptions, dependencies, change order process)?

IP Upgrade: Offer Brief Generator at ip-library/Offer Brief Generator.txt. Current state: 12-element sequential extraction via Q&A. Designed as an interactive conversation (user answers questions one at a time). Needs adaptation: the SOW Machine must PARSE prospect conversation notes and extract the 12 elements automatically — not ask questions. Content interview required to enrich: what conversation patterns map to each of the 12 elements? What does "Voice Match" look like in notes vs. in real-time conversation? How does the skill infer Step Placement from conversation language?

IP Upgrade: Signature Offer Builder at ip-library/You are the Signature Offer Builder part of The E.txt. Current state: produces a complete offer architecture document. Designed as a one-time setup tool — maps the offer once. Needs adaptation: the Practice Brain should CONTAIN Signature Offer documents (one per service). The SOW Machine reads from them when matching prospect needs to services. Content interview required to validate: is the Signature Offer format the right structure for Practice Brain services catalog entries? What fields are essential for the SOW Machine to read?

IP Gap: No documented methodology for converting a prospect conversation into a scoped proposal. The Offer Creator handles offer ARCHITECTURE (internal map). The Offer Brief handles conversation EXTRACTION (structured notes). The One-Page Proposal handles output FORMATTING. But no existing IP covers the ROUTING logic — given this conversation, which service tier matches? Given this prospect's situation, what scope is right? Given their stated problem, which deliverables apply? Content interview required to extract: how does Kathryn go from "I just talked to someone who needs X" to "here's what I'd propose"? What's the decision tree?

IP Gap (partially addressed 2026-03-27): Proof integration into proposals. Handoff mode decided: both structured file (auto) and manual selection available. Remaining gap: (a) matching logic — when proof library has multiple case studies, how to match the right one to the prospect's situation (industry, problem type, outcome desired), (b) placement logic — Opening Context? Investment & ROI? Standalone attachment?, (c) framing logic — "A firm like yours..." adaptation. Content interview required to extract: how does Kathryn decide which case study to reference in a proposal? What makes proof relevant vs. generic?
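One way to sharpen gap (a) is to state it as a scoring heuristic. The weights and field names below are assumptions meant to make the open question concrete, not a decided design:

```python
def score_proof(case_study, prospect):
    # Assumed weighting: industry match outranks problem type, which
    # outranks desired outcome. None of these weights are decided yet.
    score = 0
    if case_study["industry"] == prospect["industry"]:
        score += 3
    if case_study["problem"] == prospect["problem"]:
        score += 2
    if case_study["outcome"] == prospect["outcome"]:
        score += 1
    return score

def pick_proof(library, prospect):
    # Recency breaks ties, per the recency criterion raised in Open Questions.
    return max(library, key=lambda cs: (score_proof(cs, prospect), cs["date"]))
```

The content interview then reduces to: are these the right fields, and is this the right ordering?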

IP Gap: Multi-service proposals — current IP assumes one offer per proposal. Practice owners often need to propose bundled services (Diagnostic + Project) or phased engagements (Phase 1 now, Phase 2 in Q2). Content interview required: how does Kathryn handle multi-service scoping? When does she bundle vs. phase? How does pricing change for bundles?


Design Constraint Check

| Constraint | How This Skill Meets It |
| --- | --- |
| Can't fail | Three inputs, all available: (1) Prospect conversation notes — something they already have (email thread, call notes, or a Client Expansion Finder opportunity). (2) Practice Brain — services catalog and pricing from The Groundwork. (3) Proof Engine output — case study from yesterday's session. If they only have conversation notes, the skill still produces a SOW — just without proof integration. Kathryn is in the room for live support. |
| Sustainable | Run every time a prospect conversation happens. "Someone expresses interest → run the SOW Machine." The speed (3 minutes vs. 3 hours) means they write proposals for opportunities they'd previously skip. Scope Protection Notes prevent recurring scope creep. |
| Win fast | By Session 3, the full loop is visible: found it Tuesday, built proof Wednesday, closing it Thursday. The SOW is real — a proposal they can send TODAY to a real prospect, with their real services, at their real pricing, with proof from their real engagement. The win isn't a template — it's a sent proposal. |
| Non-technical | Three inputs, all already produced by the system: conversation notes (from their own prospect interaction), Practice Brain (built during The Groundwork), Proof Engine output (from yesterday's session). Paste and run. Kathryn is live for the first run. No configuration beyond the skill install. |
| 10-100x value | Professional proposal writing runs $500-$2,000 per engagement. The SOW Machine produces a scoped, priced, proof-integrated proposal in minutes — and they can run it every time a prospect conversation happens. One closed deal from a proposal they would have skipped writing pays for the Intensive 30x over. |

Quality Bar

$97 for 3 sessions should feel like $1,000+. They should feel this session alone was worth the price of the entire Intensive — and they built it themselves in under an hour. The SOW output should read like a $500–$2,000 proposal writing service — scoped, priced, proof-integrated, and sendable.

"They should leave Session 3 thinking: I just wrote a proposal with proof attached in 3 minutes. It used to take me 4 hours and I never included proof."


Input Design

Three inputs (all produced by the system):

  1. Prospect conversation notes — email thread, call notes, or a Client Expansion Finder opportunity that converted into a conversation. Messy is fine. The skill extracts the 12 Offer Brief elements automatically.
  2. Practice Brain — services catalog (what they offer, pricing tiers, delivery formats, scope templates), engagement model (how they work), and voice preferences. Produced during The Groundwork and refined through Sessions 1-2.
  3. Proof Engine output (optional but powerful) — case study draft and testimonial from Session 2. The skill matches proof to the prospect's situation and integrates it into the SOW.

Zero-friction test:

| Question | Answer |
| --- | --- |
| Does the user already have this data? | Yes — conversation notes are from a real prospect interaction they already had. Practice Brain is on their machine. Proof Engine output is from yesterday. |
| Can they paste it in under 2 minutes? | Yes — paste conversation notes, Practice Brain is already loaded, Proof Engine output is already on their machine. |
| Does it work with messy, incomplete data? | Yes — partial conversation notes produce partial SOWs. Missing pricing = the skill flags it and uses ranges. Missing proof = SOW goes out without proof (still better than the 4-hour version). |
| Is there a second input path? | Yes — instead of conversation notes, they can describe the prospect's situation verbally (Kathryn guides live). Or use a Client Expansion Finder outreach response as the conversation input. |

Key difference from handraiser version (Scope-to-SOW Converter): The handraiser takes plain language and produces a basic SOW. The SOW Machine takes prospect conversation notes + Practice Brain (services, pricing, format, voice) + Proof Engine output (case studies, testimonials) and produces a complete proposal with proof integration, scope protection, and investment framing. Context-rich vs. context-free. The handraiser demonstrates the approach. The Intensive deploys the system.
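The "missing pricing = flag it and use ranges" behavior from the zero-friction table could look like the sketch below. The field names are hypothetical — the Practice Brain's actual pricing structure is an open question later in this brief:

```python
def investment_line(service_name, practice_brain=None):
    # Degrade gracefully per the zero-friction table: exact price when the
    # Practice Brain has it, a flagged placeholder when it doesn't.
    pricing = (practice_brain or {}).get("pricing", {})
    if service_name in pricing:
        return f"Investment: ${pricing[service_name]:,}"
    return (f"Investment: [RANGE — pricing for '{service_name}' not found "
            "in Practice Brain; confirm before sending]")
```

The key design point is that missing data never blocks output; it only downgrades it visibly.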


Foundational Dependency

The SOW Machine works at three power levels:

Minimum (conversation notes only): Produces a basic scoped proposal from conversation details. No service matching, no pricing from catalog, no proof. Functional but generic.

Standard (conversation notes + Practice Brain): Full power — service matching from catalog, pricing from tier structure, scope from templates, voice from preferences. This is the expected Intensive experience.

Maximum (conversation notes + Practice Brain + Proof Engine output): The full loop — everything above PLUS proof integration. The SOW opens with a case study reference, includes a testimonial pull-quote in the investment section, and attaches the full case study. This is the Session 3 demo moment.
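The three levels reduce to a simple dispatch on which inputs are present. A minimal sketch, with names assumed:

```python
def power_level(notes, practice_brain=None, proof=None):
    # Conversation notes are the one hard requirement; the other
    # inputs upgrade the output rather than gating it.
    if not notes:
        raise ValueError("conversation notes are required")
    if practice_brain and proof:
        return "maximum"   # full loop, including proof integration
    if practice_brain:
        return "standard"  # service matching + catalog pricing
    return "minimum"       # functional but generic
```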

For the Intensive: By Session 3, participants have all three inputs. They've completed The Groundwork (Practice Brain), run the Proof Engine (case study + testimonial from Session 2), and brought prospect conversation notes. The full loop is visible.

Upgrade path: Inside Practice Builders, participants build deeper services catalogs, accumulate proof libraries, and refine scope templates over time. Each SOW gets better as the system inputs get richer. Proof selection becomes more targeted as the library grows.


The Skill Output (Sections)

| # | Section | Job |
| --- | --- | --- |
| 1 | Prospect Snapshot | Who they are, what they said, what they need — in their words, from the conversation notes |
| 2 | Service Match | Which services from the Practice Brain catalog match the prospect's stated needs. Table: Need Expressed → Service Matched → Why This Fits |
| 3 | Scope Definition | What's in, what's out, deliverables per phase, completion criteria. Uses Diagnostic → Project → Continuity structure where applicable. |
| 4 | Investment & ROI | Price, format (project/retainer/phased), ROI framing in the prospect's language. COI calculation if data supports it. One number, one recommendation. |
| 5 | Proof Integration | Matched case study excerpt and/or testimonial pull-quote from Proof Engine output. Placed in context: "A firm like yours..." framing. If no proof available, section states what's missing. |
| 6 | SOW Document | The actual proposal — formatted, ready to send. Follows One-Page Proposal structure: Header, Title, Opening Context (with proof), Recommendation (phased deliverables), How We'll Work, Investment & ROI, Next Step. |
| 7 | Scope Protection Notes | Anticipated scope creep (top 3 requests to watch for), boundary language for each, when to re-scope vs. absorb. Internal-facing — not in the proposal itself. |

Extraction Logic (Signal Types Equivalent)

The skill extracts structured data from messy conversation notes. Each extraction maps to an Offer Brief element:

| What's Extracted | From the Conversation | Maps To | Rooted In |
| --- | --- | --- | --- |
| Problem Statement | Prospect's exact words describing what's wrong | Offer Brief Elements 1-2 (Voice Match, Exact Words) | Offer Brief Generator (12-element extraction) |
| Step Placement | Crisis (bandaid needed now), Problem (foundation needed), Growth (expansion), Vision (transformation) | Offer Brief Element 3 (4-step ladder) | Offer Brief Generator + MVO Discovery (crisis identification) |
| Outcome Expected | What the prospect said they want to achieve | Offer Brief Element 4 (Honest Outcome) | Offer Brief Generator |
| Service Match | Prospect's needs matched against Practice Brain catalog | Signature Offer Builder (services → scope mapping) | Signature Offer Builder + Offer Creator (tier matching) |
| Scope Boundaries | What's in, what's out, anticipated creep | Offer Brief Elements 8-9 (Access Boundaries, Scope Protection) | Offer Creator GPT (scope creep pressure testing) + Scope Upgrade Script Generator (boundary language) |
| Pricing Signal | Budget hints, value perception, urgency | Offer Brief Elements 10b (Investment) + COI | Offer Creator GPT (COI formulas) + Profit Lead Detector (rate methodology) |
| Proof Match | Prospect's industry, problem type, desired outcome → matched to proof library | Proof Integration section | Gap — methodology not documented. Needs design. |
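For the Step Placement row, a toy keyword classifier shows the shape of the inference problem. The keyword lists are invented — the real conversation patterns are exactly what the content interview needs to extract from Kathryn:

```python
# Invented keyword lists — placeholders for the patterns a content
# interview would need to supply for each rung of the 4-step ladder.
LADDER = {
    "Crisis":  ("urgent", "asap", "emergency", "right now"),
    "Problem": ("broken", "inconsistent", "keeps happening"),
    "Growth":  ("scale", "expand", "more clients"),
    "Vision":  ("long term", "transform", "five years"),
}

def step_placement(notes):
    text = notes.lower()
    scores = {step: sum(kw in text for kw in kws) for step, kws in LADDER.items()}
    best = max(scores, key=scores.get)
    # Default to mid-ladder when nothing matches, rather than guessing Crisis.
    return best if scores[best] else "Problem"
```

A real version would need to weigh context and negation, not just keyword hits, which is why this element is flagged for the interview.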

Cohesion Check — Intensive Series Arc

| # | Skill | Session | Job | Throughline |
| --- | --- | --- | --- | --- |
| 1 | Client Expansion Finder | Tue — Find | Find growth hiding in your existing client base | Your practice already has the clients |
| 2 | Proof Engine | Wed — Prove | Turn past engagements into proof that sells for you | Your practice already has the proof |
| 3 | SOW Machine | Thu — Close | Write scoped proposals in minutes, not hours | Your practice already has the deal |

Skill #2 → Skill #3 connection: Proof Engine produces a case study draft and a testimonial. SOW Machine reads that output and integrates it into the proposal. "You built the proof yesterday. Today it goes into a proposal you can send." The case study becomes the Opening Context ("A firm like yours faced the same problem — here's what happened"). The testimonial becomes an Investment & ROI anchor.

Full loop visible by Session 3: Found opportunities on Tuesday (Client Expansion Finder) → built proof on Wednesday (Proof Engine) → closing the deal on Thursday (SOW Machine). The proposal references a real client, uses real services, quotes real pricing, and attaches real proof. Nothing generic. Nothing hypothetical.

Handraiser → Intensive upgrade: Scope-to-SOW Converter (handraiser Skill #3) takes plain language and produces a basic SOW. SOW Machine (Intensive) takes prospect conversation notes + Practice Brain + Proof Engine output and produces a complete proposal with proof integration, scope protection, and investment framing. The handraiser is a converter. The Intensive is a system.

The closing arc: "Day in the Life" — quarterly Client Expansion Finder scan, every engagement completion triggers Proof Engine, every prospect conversation runs through SOW Machine. Three skills, continuous loop: Find → Prove → Close → Deliver → Prove → Close...


Teaching Story

TBD — needs real testing.

Kathryn runs the SOW Machine on a real prospect conversation (or a Client Expansion Finder opportunity that she follows up on) and reports:


Distribution

| Field | Value |
| --- | --- |
| Trigger word | TBD |
| Delivery method | Installed during Session 3 with Kathryn live |
| Practice Brain required | Yes — services catalog + pricing + engagement model |
| Proof Engine output used | Yes (optional) — case study + testimonial for integration |
| Series position | Session 3 of 3 (finale) |
| Input from | Proof Engine (Session 2) for proof. Prospect conversation notes for scope. Practice Brain for services/pricing. |
| Output | The deliverable. A real proposal they can send today. |
| Draft skill file | None yet — build after brief validation via Skill Build Kit |

Open Questions

  1. SOW vs. One-Page Proposal: Is the output always a one-page proposal? Or should the skill detect engagement size and produce a one-page version (for <$5K engagements) and an expanded SOW (for >$5K with terms, assumptions, dependencies)? The One-Page Proposal Generator enforces ~40 lines. Is that always right?
  2. Proof selection logic — dual handoff decided (2026-03-27). Two modes: (A) Structured file from Proof Engine auto-feeds into SOW Machine — zero-friction, proof flows in without manual selection. (B) Participant manually selects which proof to include, guided by Kathryn. Both modes built; Kathryn chooses per session. Remaining question: when the proof library has MULTIPLE case studies (from repeat Proof Engine runs), how does auto-selection work? By industry? Problem type? Outcome type? Recency?
  3. Pricing source: Where does pricing live in the Practice Brain? Signature Offer Builder documents it per service. Is that sufficient, or does the skill need a separate pricing table? What about volume discounts, phased pricing, or founding rates?
  4. Scope templates: Do participants build scope templates during The Groundwork, or does the skill generate scope from scratch each time? Templates would improve consistency. But they need to be built.
  5. Multi-service proposals: Current IP assumes one offer. How does the skill handle "I need help with X and Y"? Two separate SOWs? One combined? Phased with milestones?
  6. Relationship to Scope-to-SOW Converter: Same question as Skill #1 — entirely separate build, or does SOW Machine start from the handraiser skill and add Practice Brain + proof integration? Less build work if they share a foundation.
  7. Post-proposal scope protection: The Scope Protection Notes are internal-facing. Should the skill also produce scope protection language for the SOW itself (e.g., "Change Order Process" section)?
  8. Demo data: What prospect conversation does Kathryn use for the Session 3 live demo? A real prospect? A Client Expansion Finder opportunity she followed up on? A re-creation?

Next Steps