Concept Brief — Skill #4: Content-from-Delivery Engine
Date: 2026-03-26
Status: Draft — validating direction
Position in series: Skill 4 of 5
Teased in DM 3 of Skill #3 as: TBD — Skill #3 DM sequence not yet built. Planned tease: "You just closed a deal. The next skill turns that delivery into content that brings in the next client."
The Problem
Practice owners produce excellent work but never turn it into anything anyone else can see. Three costs:
- Outcomes evaporate after delivery — they exist in memory, notes, and client conversations, but nobody writes them down in a form a prospect can find
- They spend 20-30 minutes per prospect call proving they can do the work — an unpaid credibility audition because they have nothing to point to ("Tell me about a time you helped someone like me" and the answer is a scramble)
- Every completed engagement contains a LinkedIn post, a case study snippet, and a testimonial seed — but without an extraction system, none of it gets captured
IP Direction (Source Material)
The Content-from-Delivery Engine's deepest IP comes from the Proof Gap campaign — a complete article, briefing script, and two interactive tools that document the extraction methodology. The skill operationalizes what the campaign teaches.
| Concept | What It Captures | Vault IP Source |
|---|---|---|
| Three Layers of Every Engagement | Surface (what the owner would write — accurate, flat), Real (what emerges with follow-up — numbers, actions, before/after), Prospect (what makes someone pick up the phone — starts where the client started). The extraction framework that transforms flat descriptions into compelling content. | Briefing script: campaigns/proof-gap/proof-gap-briefing-script-v3.md — the deepest methodological document. Defines the three layers, extraction blind spots, and the composited case study pattern. |
| Four Trigger Questions | Which engagements to document first: Has a client renewed without selling? Referred without being asked? Avoided a mistake? Situation changed meaningfully? | Briefing script: campaigns/proof-gap/proof-gap-briefing-script-v3.md. Also: Story Finder tool: campaigns/proof-gap/advisory-os-proof-gap-story-finder.html — 5-question assessment scoring Specificity, Impact, Narrative Clarity, Readiness. |
| "What He Skipped" — Extraction Blind Spots | Three mistakes that make content flat: (1) Missing emotional starting point — leading with methodology instead of the client's experience, (2) Burying compounding outcomes — treating the one-time fix as the whole story when the real value is what it unlocked, (3) Vague language — "streamlined operations" vs. "$200K in one quarter." | Briefing script: campaigns/proof-gap/proof-gap-briefing-script-v3.md, Section 3B. These become the skill's output quality checks. |
| Proof Quality Scoring | Four dimensions: Specificity (can you quantify it?), Impact (how significant?), Narrative Clarity (explainable outside your industry?), Readiness (can it be told without naming the client?). | Story Finder tool: campaigns/proof-gap/advisory-os-proof-gap-story-finder.html |
| One-Result Reframe | Extraction methodology: anchor the narrative on one specific result. Warm, non-pushy tone. One question at a time. Reflects patterns back. Two structured outputs. | GPT: ip-library/One-Result Reframe Assistant System Prompt.txt — conversational extraction methodology with two email outputs. The tone and specificity approach transfers directly to content production. |
| Proof Type Categories | Six categories for classifying engagement outcomes: Revenue Discovery, Bottleneck Removal, Decision Clarity, Speed, Capability Transfer, Risk Prevention. Each category determines the narrative framing. | Proof Engine concept brief: content-pipeline/concept-briefs/skill-concept-brief-proof-engine.md — IP Direction table. Testimonials: business-aos/reference/proof/testimonials.md — 6 themes confirmed. |
These concepts ARE the IP. The skill combines them: select the engagement (Four Trigger Questions), extract the story (Three Layers), classify the proof type (6 categories), apply quality checks (What He Skipped), and produce the content (LinkedIn posts).
The Proof Gap campaign is the methodological backbone. The briefing script v3 contains the extraction philosophy. The Story Finder tool contains the scoring logic. The article contains the teaching framework. This skill turns all of that into an automated extraction-to-content system.
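The five-step combination (select, extract, classify, check, produce) can be pictured as a pipeline. This is a purely illustrative sketch: every name, field, and stub below is hypothetical, not the actual skill implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the skill's pipeline. Names and fields are
# invented for illustration; only the concepts come from the brief.

PROOF_TYPES = [
    "Revenue Discovery", "Bottleneck Removal", "Decision Clarity",
    "Speed", "Capability Transfer", "Risk Prevention",
]

@dataclass
class Engagement:
    description: str
    renewed: bool = False            # Trigger Q1: renewed without selling?
    referred: bool = False           # Trigger Q2: referred without being asked?
    mistake_avoided: bool = False    # Trigger Q3: avoided a mistake?
    situation_changed: bool = False  # Trigger Q4: situation changed meaningfully?

def qualifies(e: Engagement) -> bool:
    """Four Trigger Questions as an input gate: any 'yes' makes it a candidate."""
    return any([e.renewed, e.referred, e.mistake_avoided, e.situation_changed])

def run_engine(e: Engagement, proof_type: str) -> dict:
    """Select -> extract (Three Layers) -> classify -> produce (stubbed)."""
    if not qualifies(e):
        raise ValueError("Engagement fails the Four Trigger Questions gate")
    if proof_type not in PROOF_TYPES:
        raise ValueError(f"Unknown proof type: {proof_type}")
    # The Real and Prospect layers would come from actual extraction.
    layers = {"surface": e.description, "real": "...", "prospect": "..."}
    posts = [f"Post {i + 1} ({proof_type})" for i in range(3)]
    return {"layers": layers, "proof_type": proof_type, "posts": posts}
```

The gate-then-extract shape matters: engagement selection (Four Trigger Questions) happens before any content work, which is what keeps the "can't fail" constraint honest.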
Micro-Magnet Archive (Searched)
No micro-magnets directly address content extraction from deliverables. Adjacent titles:
| File | Covers |
|---|---|
| ip-library/micromagnet-archive-2026-3-15/The Existing Client First Launch™.docx | Launching new services to existing clients — adjacent to content-from-delivery as a growth strategy |
Assessment: The micro-magnet archive is thin for this skill. The deep methodology lives in the Proof Gap campaign assets and the GPT assistants, not in micro-magnets.
Campaign Folders (Searched)
The Proof Gap campaign folder is the primary source:
| File | Covers | Classification |
|---|---|---|
| campaigns/proof-gap/proof-gap-briefing-script-v3.md | Three Layers, Four Trigger Questions, What He Skipped, composited case study pattern | Source — deepest extraction methodology |
| campaigns/proof-gap/advisory-os-proof-gap-story-finder.html | 4-dimension scoring: Specificity, Impact, Narrative Clarity, Readiness | Source — engagement selection logic |
| campaigns/proof-gap/advisory-os-proof-readiness-check.html | 4-stage proof chain: Not Extracted → Not Documented → Not Deployed → Deployed but Not Converting | Source — maturity assessment |
| campaigns/proof-gap/advisory-os-proof-gap.html | Published article with interactive visualizations of the Three Layers | Reference — teaching framework |
| campaigns/proof-gap/proof-gap-briefing-walkthrough.html | Interactive hover-driven walkthrough of the extraction gap | Reference — visual explanation |
Adjacent Existing IP (Reference, Not Source)
- Client Updates That Write Themselves workshop (ip-library/Client Updates That Write Themselves/Workshop_Client Updates That Write Themselves.txt) — Input template design for extracting value from meeting notes. 12 strategic decisions systematized. Adjacent: the structured-input-to-content pattern transfers.
- Proof Gap Briefing Landing Page (campaigns/proof-gap/proof-gap-briefing.html) — Time-limited briefing page with Case Study Build offer ($497). Reference: the extraction service this skill replaces.
- Improvised Recurring Work concept brief (content-pipeline/concept-briefs/concept-brief-03-improvised-recurring-work.md) — Recurring Work Rhythm. Adjacent: documented recurring work is easier to extract content from.
- AOS Interactive Narrative Kit (content/frameworks/aos-interative-narrative/) — Article production methodology. Reference: informs the skill's content output format patterns.
- business-aos/reference/proof/angles/proof-gap.md — Proof Gap angle. Thin reference — the campaign has the real depth.
- business-aos/reference/proof/case-studies/sync-tax-arc.md — Completed case study arc showing the output standard. Reference for what "done" looks like.
- business-aos/reference/proof/testimonials.md — 6 themes confirmed: revenue discovery, bottleneck removal, decision clarity, speed, practical over theoretical, cross-vertical transfer.
IP Gaps & Upgrades
IP Upgrade: Proof Gap Briefing Script at campaigns/proof-gap/proof-gap-briefing-script-v3.md. Current state: the deepest extraction methodology — Three Layers, Four Trigger Questions, What He Skipped, composited case study pattern. But designed as a teaching narrative for a video briefing, not as skill instructions. Adaptation path (no interview required): Three Layers becomes output structure (Surface → Real → Prospect rewrite of each engagement), Four Trigger Questions becomes input validation (does this engagement qualify?), What He Skipped becomes automated quality checks on the output.
IP Gap: LinkedIn post structure methodology — no documented system for translating a three-layer engagement narrative into a pattern-revealing LinkedIn post. The Proof Engine concept brief (Intensive version) flags this as a shared gap. Content interview required to extract: how Kathryn translates an engagement outcome into a pattern-revealing LinkedIn post, the structure she uses, how she anonymizes while keeping specificity, and what makes a post pattern-revealing vs. self-promotional.
IP Gap: Anonymization logic — the Proof Gap campaign mentions composited case studies but doesn't formalize the rules. Content interview required to extract: how aggressive anonymization is, where the line falls between compelling and publishable, and whether different practice types require different levels.
Design Constraint Check
| Constraint | How This Skill Meets It |
|---|---|
| Can't fail | One input: describe a completed client engagement. What you did, what happened, what changed for the client. Bullet points, not prose. Messy is fine — partial details produce partial content. The Four Trigger Questions help them pick the RIGHT engagement (one with a clear outcome). If they can remember a client engagement, they can run this. |
| Sustainable | Run every time you finish a piece of work. "Engagement wraps → run Content-from-Delivery → publish this week." Content compounds — each run adds to the content library. The more you run it, the more proof you have on LinkedIn. |
| Win fast | First run produces 3 LinkedIn posts they can publish TODAY. Not drafts to refine — posts ready to copy-paste. The win isn't "I documented an engagement" — it's "I published something I've been sitting on for 6 months." |
Quality Bar
The recipient should feel fortunate they got this for free. Slightly guilty they didn't pay for it. The LinkedIn posts should read like a $500 content strategist produced them — pattern-revealing, specific, and immediately publishable.
- LinkedIn posts are pattern-revealing, not self-promotional — lead with the prospect's situation, not the provider's expertise
- Each post uses the Three Layers methodology — not a flat "we helped Client X achieve Y" summary
- Posts include specific numbers where the engagement supports them — "$200K in one quarter" not "improved revenue"
- Anonymization is clean — compelling without identifying the client
- Paired with Skill #1: the Client Intelligence Brief reveals what's happening. Skill #4 turns what happened into content.
- Paired with Skill #3: the Scope-to-SOW Converter closes the deal. Skill #4 turns the delivery into the next lead.
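Two of the quality-bar checks above ("specific numbers" and "lead with the prospect's situation") could run as simple predicates over a draft post. A purely illustrative sketch — the regex heuristic and function names are invented, not the skill's actual checks:

```python
import re

# Illustrative blind-spot checks inspired by "What He Skipped".
# The heuristics (a dollar/percent/digit regex as a proxy for
# specificity, keyword matching for the emotional opening) are invented.

def check_specificity(post: str) -> bool:
    """Vague vs. concrete language: look for $, %, or any number."""
    return bool(re.search(r"\$[\d,]+|\d+%|\b\d+\b", post))

def check_emotional_start(post: str, situation_words: list[str]) -> bool:
    """Does the opening start where the client started, not with methodology?"""
    opening = post.split("\n")[0].lower()
    return any(w.lower() in opening for w in situation_words)

def quality_flags(post: str, situation_words: list[str]) -> list[str]:
    """Collect blind-spot flags for a draft post."""
    flags = []
    if not check_specificity(post):
        flags.append("vague language: no concrete number found")
    if not check_emotional_start(post, situation_words):
        flags.append("missing emotional starting point")
    return flags
```

In practice these checks would be judgment calls made by the skill itself, not regexes; the sketch just shows that each blind spot maps to a discrete, flaggable test.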
Input Design
Primary input: Engagement details — a description of one completed client engagement. What the client's situation was, what was delivered, what changed. Bullet points, rough notes, voice memo transcript — the skill extracts structure from whatever is pasted.
The skill provides an input prompt based on the Four Trigger Questions and Story Finder methodology:
- What changed for the client? (Before → After)
- Can you quantify the result? (Revenue, time saved, risk avoided, people affected)
- What was the client dealing with before you started? (The emotional starting point)
- Would you be comfortable sharing this anonymized? (Readiness check)
Second input path: Scope-to-SOW Converter output (Skill #3). If the user just closed a deal and delivered the work, they can paste the SOW + delivery notes together. The skill extracts the before (from the SOW's Opening Context) and the after (from delivery notes).
Zero-friction test:
| Question | Answer |
|---|---|
| Does the user already have this data? | Yes — they did the work. They lived it. The details are in their head, their notes, or their project files. |
| Can they paste it in under 2 minutes? | Yes — bullet points describing what happened. Not a formatted case study. |
| Does it work with messy, incomplete data? | Yes — partial details produce partial posts. Missing numbers = the skill flags it. Missing emotional context = the skill asks. Better than not capturing at all. |
| Is there a second input path? | Yes — Skill #3 output (SOW) + delivery notes. |
Key difference from Intensive version (Proof Engine): The handraiser produces 3 LinkedIn posts from a completed engagement. The Proof Engine produces the FULL proof suite: case study draft (three-layer narrative), testimonial request email (pre-populated, ready to send), AND LinkedIn post — plus a Reuse Map and Proof Quality Score. The handraiser is one output format (LinkedIn). The Intensive is three output formats plus the system for deploying them.
Foundational Skill Dependency
The Content-from-Delivery Engine works WITHOUT the foundational skills (Service List, ICP, Voice). Describe an engagement, get LinkedIn posts.
It works BETTER with them:
- With Service List: The skill frames the engagement in terms of specific, named services from the catalog. Posts reference the service by name, making them more concrete.
- With ICP: The skill writes the Prospect layer targeting the right reader. Without ICP, the post targets "someone like this client." With ICP, it targets the specific type of prospect the provider wants more of.
- With Voice: The posts match the provider's LinkedIn voice. Without it, the skill uses pattern-revealing default tone.
For this campaign: The skill works standalone. No prerequisites beyond having completed a client engagement.
Inside Practice Builders OS: Members build the foundations, then this skill becomes the Proof Engine — producing the full proof suite (case study, testimonial request, LinkedIn post) with voice calibration, ICP targeting, and proof quality scoring. Each engagement compounds the proof library. That's the upgrade path.
The Skill Output (Sections)
| # | Section | Job |
|---|---|---|
| 1 | Engagement Snapshot | What happened — client (anonymized), scope, timeline, outcome summary. Orientation for the content. |
| 2 | Proof Type Classification | Which of the 6 categories this engagement best fits: Revenue Discovery, Bottleneck Removal, Decision Clarity, Speed, Capability Transfer, Risk Prevention. Determines narrative framing. |
| 3 | Three-Layer Extraction | The engagement retold at three levels: Surface (flat summary), Real (numbers, actions, before/after), Prospect (starts where the client started — the hook). This is the raw material for all posts. |
| 4 | LinkedIn Posts (3) | Three different posts from the same engagement — each using a different angle or proof type emphasis. Pattern-revealing, not self-promotional. Anonymized. Ready to copy-paste and publish. |
| 5 | Quality Check | Three blind spot checks from "What He Skipped": (1) Emotional starting point present? (2) Compounding outcome highlighted? (3) Specificity sufficient? Flags what's strong and what needs enrichment. |
| 6 | Content Score | Four dimensions scored: Specificity, Impact, Narrative Clarity, Readiness. Overall assessment of how publishable this content is. Tells the user exactly what to strengthen if it's not ready. |
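Section 6's scoring could work roughly like this. A hypothetical sketch: the four dimensions come from the Story Finder methodology, but the 1-5 scale, the weights, and the "ready" threshold are invented for illustration.

```python
# Hypothetical Content Score sketch. The dimensions are from the Story
# Finder; the 1-5 scale and readiness thresholds are invented.

DIMENSIONS = ("specificity", "impact", "narrative_clarity", "readiness")

def content_score(scores: dict[str, int]) -> dict:
    """Average the four dimensions, flag the weakest, give an overall verdict."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    overall = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    weakest = min(DIMENSIONS, key=lambda d: scores[d])
    return {
        "overall": overall,
        # Readiness gates publication on its own: a compelling story that
        # can't be told without naming the client still isn't publishable.
        "ready_to_publish": overall >= 3.5 and scores["readiness"] >= 3,
        "strengthen_first": weakest,
    }
```

The design point the sketch makes: the output isn't just a number, it names the single dimension to strengthen, which is what "tells the user exactly what to strengthen" requires.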
Content Extraction Types
| Type | What It Catches | Rooted In | Methodology Available |
|---|---|---|---|
| Revenue Story | Found, saved, or protected money — quantified impact with dollar figures | Proof Gap campaign (Trust Tax), testimonials theme 1 | Deep — Proof Gap extraction methodology documented. Needs adaptation from teaching narrative to skill instructions. |
| Transformation Story | Before: X. After: Y. Clear structural change — bottleneck removed, system built, capability transferred | Testimonials themes 2, 3, 5; Constraint Priority Matrix framework | Deep — testimonial structure documented, CPM framework available for structural change language. |
| Speed Story | Expected timeline vs. actual — the "results in weeks, not months" narrative | Testimonials theme 4, sync-tax case study arc | Deep — sync-tax case study shows the complete arc. Timeline compression is well-documented. |
| Prevention Story | Disaster avoided or caught early — the counterfactual ("what would have happened without intervention") | Case study arc: sync-tax-arc.md "What It Unlocked" section | Partial — one example exists. Counterfactual framing methodology not formalized beyond the single case. |
| Pattern Story | A recurring pattern the provider sees across clients — not one client's result, but a systemic observation | AOS narrative methodology (pattern-revealing voice) | Gap — the voice is documented, but the methodology for extracting PATTERNS from individual engagements is not. Content interview required to extract: how does Kathryn spot a pattern in one client's outcome and frame it as a universal observation? |
| Insight Story | The "I didn't expect this" finding — a non-obvious outcome that teaches something about the problem space | Proof Gap briefing script ("What He Skipped" — compounding outcomes) | Partial — the concept of surfacing compounding outcomes is documented, but the methodology for identifying insight-worthy moments needs enrichment. |
Cohesion Check — Series Arc
| # | Skill | Job | Throughline |
|---|---|---|---|
| 1 | Client Intelligence Brief | See what's happening with active clients | You already have the information |
| 2 | Hidden Revenue Scan | Find money in relationships you already have | You already have the revenue |
| 3 | Scope-to-SOW Converter | Convert conversations into proposals | You already have the opportunity |
| 4 | Content-from-Delivery Engine | Turn client work into marketing | You already have the content |
| 5 | Referral Activator | Grow through clients you already have | You already have the network |
Throughline: "You already have everything you need to grow." Each skill reveals what's already there and builds a system to capture it.
Skill #1 → Skill #2 connection: The Client Intelligence Brief shows what's happening with active clients. The Hidden Revenue Scan finds money sitting in those same relationships. "You can see what's happening. Now find the revenue you're leaving on the table."
Skill #3 → Skill #4 connection: Scope-to-SOW closes the deal. The delivery happens. Content-from-Delivery turns that delivery into LinkedIn posts that attract the next client. "You closed the deal with Skill #3. You delivered the work. Now Skill #4 turns that delivery into 3 posts that bring in the next one."
Skill #4 → Skill #5 tease: "You've got content publishing. Now the last skill activates the people who already trust you to send business your way."
Handraiser → Intensive upgrade: Content-from-Delivery Engine (handraiser) produces 3 LinkedIn posts from a completed engagement. Proof Engine (Intensive) produces the full proof suite — case study draft (three-layer narrative), testimonial request email (pre-populated, ready to send), AND LinkedIn post — plus a Reuse Map and Proof Quality Score. The handraiser is one output format. The Intensive is three formats plus the deployment system.
Teaching Story
TBD — needs real testing.
Kathryn runs the Content-from-Delivery Engine on a real completed engagement and reports:
- Which engagement did she choose? Why?
- Did the Four Trigger Questions help her pick, or did she already know?
- How did the Three Layers change what she'd normally write? What did the Prospect layer add?
- Were the 3 LinkedIn posts publishable? Would she post any of them as-is?
- Were they pattern-revealing or self-promotional? Did they pass the "would a prospect read this and think about their own situation" test?
- What did the Quality Check flag? Were the blind spots real?
- What did the Content Score show? Was the engagement ready to publish?
- The "it was right" moment — what surprised her about the output?
Distribution
| Field | Value |
|---|---|
| Trigger word | TBD |
| Delivery URL | TBD |
| Cloudinary URL | TBD |
| Series position | Skill 4 of 5 |
| Next skill teaser | TBD (Referral Activator direction) |
| Draft skill file | None yet — build after brief validation via Skill Build Kit |
Open Questions
1. Post count: Does the skill always produce exactly 3 posts, or does it adapt (1-3 based on what's extractable)? Three feels right for demonstrating the methodology, but a thin engagement might only support 1-2 quality posts.
2. Scope: Handraiser = LinkedIn posts only. Or should it also produce a testimonial request seed and a case study snippet (mini proof suite)? The Intensive version has all three. The handraiser could have a stripped version that teases the full suite.
3. Anonymization rules: How aggressive? The Proof Gap campaign mentions compositing — changing details while keeping the pattern. Is there a standard? Or does it depend on the client relationship?
4. LinkedIn voice: Pattern-revealing, not self-promotional. But what does that look like structurally? Short paragraphs? First person? Opening with a question? Opening with a scenario? This is the LinkedIn post structure gap.
5. Engagement depth minimum: What's the minimum engagement that produces publishable content? A 2-hour strategy session? A 6-month project? Where's the floor?
6. Relationship to Proof Engine: Same question as other skills — entirely separate build, or does Content-from-Delivery become the Proof Engine's base with full proof suite as the upgrade? Shared extraction methodology reduces build work.
7. Content calendar integration: Does the skill suggest when to publish? If they run it monthly (one engagement per month = 3 posts per month), does it suggest a publishing cadence?
Next Steps
- [ ] Kathryn validates this brief
- [ ] Content interview: LinkedIn post structure methodology (how does Kathryn translate outcomes into pattern-revealing posts?)
- [ ] Content interview: anonymization rules (how aggressive, what's the standard?)
- [ ] Content interview: pattern extraction from individual engagements (how does one client's result become a universal observation?)
- [ ] Design decision: 3 posts always, or adaptive count?
- [ ] Design decision: LinkedIn only, or mini proof suite (post + testimonial seed + case study snippet)?
- [ ] Resolve open questions (especially #2 and #6 — scope and relationship to Proof Engine affect build)
- [ ] Kathryn tests draft skill on a real completed engagement
- [ ] Capture teaching story from test results
- [ ] Build through Skill Build Kit process after brief is validated