CAMPAIGN PRODUCTION WORKFLOW
From Concept Brief to Finalized Evergreen Campaign
OVERVIEW
This is the end-to-end workflow for producing a multi-asset campaign for Advisory OS. Each campaign follows the same architecture: an interactive thought leadership article, two micro-tools, a briefing video (recording visual + landing page), and a distribution package (LinkedIn posts, emails, Substack notes, DM sequences). The campaign runs for one week with a timer-based expiration on the briefing, then converts to an evergreen asset set.
Campaigns produced using this workflow:
- The Subtract/Add Equation (Campaign 1)
- The Proof Gap (Campaign 2)
- The Politeness Premium (Campaign 3 — partial)
- The Silent List / No-Show Revival (Campaign 4)
Time to produce: 3–5 working sessions when the workflow is followed. Longer when steps are skipped or when assets are built before upstream decisions are locked.
PHASE 0: CONCEPT EXTRACTION
Input: Client work, observation, or existing framework
Output: Campaign Input Brief
What Happens
The concept comes from one of three sources: something that happened in client work, a pattern observed across multiple clients, or an existing framework that needs to be campaigned. An interview agent (or structured conversation) extracts the core insight, the case material, and the asset architecture.
The Deliverable
A Campaign Input Brief containing:
- Core insight (one sentence)
- The tension (what the audience believes vs. what's actually true)
- Case material (anonymized or composited client story)
- Anchor phrase / campaign title
- Micro-tool concepts (what would they measure/discover?)
- Briefing case (different from article case — complementary, not redundant)
Skills & References
- Interview Agent (interview-agent.jsx) — standalone tool for mining topics
- Can also be done conversationally in a project chat
Problems We've Had
- Interview agent couldn't access project files. It's a standalone Sonnet instance. Fix: embed compressed asset examples in the system prompt, or use Opus.
- Brief was too generic. "Document your outcomes" level, not "here's the specific calculator concept with inputs and outputs." Fix: the brief needs to be specific enough to build from, not just a direction-setter.
- Concept drifted during build. The Subtract/Add campaign started with a clear equation but the article went through multiple structural rewrites because the concept wasn't fully locked before building started. Fix: don't start building HTML until the narrative arc is written in prose.
- Case material wasn't ready. Multiple campaigns hit a wall because the anonymized client case for the briefing hadn't been developed. Fix: the case needs to be outlined in Phase 0, not invented during the briefing script phase.
PHASE 1: CAMPAIGN ARCHITECTURE
Input: Campaign Input Brief
Output: Campaign Map + Narrative Arc
What Happens
The brief gets translated into a campaign map (what assets, what sequence, what links to what) and a narrative arc for the thought leadership article. Decisions get locked here — not during the build.
The Deliverables
Campaign Map — visual or structured document showing:
- Campaign week schedule (Tuesday start, Sunday briefing expiration)
- Asset inventory (article, tool 1, tool 2, briefing interactive, briefing landing page)
- Distribution inventory (3 LinkedIn posts, 5 emails, 3 Substack notes, DM sequence)
- Conversion paths (article → tool → diagnostic, hand-raiser → DM → tool → diagnostic)
- Links between assets (what references what, what the expired state points to)
Narrative Arc — prose outline of the article:
- Working title
- The tension / opening hook
- Section-by-section beats (what the reader learns, what shifts)
- Interactive element concepts (what earns its place, what it reveals)
- Offer bridge (how the article connects to the diagnostic)
The Weekly Rhythm
| Day | LinkedIn | Email | Other |
|---|---|---|---|
| Monday | — | — | Nothing. People catching up. |
| Tuesday | — | Content Launch (article + tool) | Campaign launches |
| Wednesday | TL Post #1 (no CTA, article link in first comment) | — | Live workshop (11am ET, optional) |
| Thursday | — | Drive to LinkedIn or Story | — |
| Friday | Hand-raiser post (keyword → DM → tool) | Direct Ask | — |
| Saturday | — | Briefing launch email | Briefing video goes live |
| Sunday | — | — | Briefing expires at midnight ET |
Note: This rhythm evolved across campaigns. The Subtract/Add campaign tried to launch everything simultaneously. By the Silent List campaign, the sequence was staggered. The staggered approach works better — each day has one primary action.
Skills & References
- Distribution System Master (distribution-system-master.md) — the channel architecture
- Campaign Map examples from Subtract/Add and Silent List campaigns
- LinkedIn TL Post System Instructions (linkedin-tl-post-system-instructions.md)
Problems We've Had
- Campaign map had inconsistencies. The Proof Gap map referenced the "calculator" in multiple places after the decision was made to build the Story Finder instead. Fix: when an asset name changes, search-and-replace across the entire map.
- Too many open questions at build time. The Proof Gap map had six unanswered decisions (keyword, briefing case, tool names) when we started building. Fix: resolve ALL open questions in the campaign map before moving to Phase 2. No building with TBDs.
- Campaign launch date kept slipping. Proof Gap was supposed to launch Feb 9, but nothing had been built by that date. Fix: the campaign map should include a realistic build schedule, not just a distribution schedule.
- Tried to build everything at once. The Subtract/Add campaign attempted to produce 30+ distribution assets before the core assets (article, tools) were finished. Fix: build in phases. Core assets first, distribution second, evergreen third.
PHASE 2: INTERACTIVE THOUGHT LEADERSHIP ARTICLE
Input: Narrative Arc from Phase 1
Output: Ship-ready HTML article
What Happens
The article is the campaign's anchor asset. It's an interactive HTML page with embedded visualizations that the reader engages with while reading. It follows the no-act-structure format (title only, no Act I/II/III labels) and alternates between light and dark sections.
Build Sequence
- Write the full narrative in prose first. Not HTML. Not code. Just the story, section by section. Get the voice right before touching any code.
- Design the interactive elements. For each section, decide: does this need a visualization? What does the reader interact with? What does it reveal? Use the "earns its place" test — if the element doesn't create a moment of revelation that text alone can't, cut it.
- Build the HTML. Follow the Interactive Builder SOP and the interactive-narrative-SKILL.md. Golden examples are the standard.
- QC pass — three checklists, every build. Run all three before flagging the article as ready:
  - copy-qc.md — AI pattern detection (twinning, mirror reversals, three-beat parallels, correction-revelation compound check). This is the one that catches the most issues.
  - 04-aos-article-quality.md — Narrative architecture, interactive quality, prose quality, structural elements, theme alternation. Covers hero subtitle, section headlines, prose-interactive-prose sandwich, comparison table placement.
  - aos-brand-kit-qc-v1.md — Visual brand compliance (colors, typography, layout, signature elements). Cross-reference against the golden example for established precedent before flagging failures.
  Fix everything that fails before moving to Phase 3. Do not defer.
- Polish pass. Brand compliance (never gold text on cream backgrounds — use #6b5d3e), footer matches standard, nav matches standard, all links work.
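The contrast rule is mechanically checkable. A minimal sketch of the WCAG 2.x relative-luminance formula, confirming the brand pairing above (gold on cream fails the 4.5:1 AA threshold; the dark olive-gold passes):

```javascript
// Relative luminance per WCAG 2.x, from a "#rrggbb" hex string.
function luminance(hex) {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05).
function contrast(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrast("#b79d64", "#f5f4f0").toFixed(2)); // 2.38 — fails AA
console.log(contrast("#6b5d3e", "#f5f4f0").toFixed(2)); // 5.85 — passes AA
```

Worth wiring into the polish pass so the gold-on-cream failure can never reach a "final" label again.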
Skills & References
- Interactive Builder SOP (InteractiveBuilderSOP.html)
- Interactive Narrative Skill (interactive-narrative-SKILL.md)
- Copy QC Agent (copy-qc-agent.md)
- Golden examples: Subtract/Add article, Proof Gap article, No-Show Revival article
Problems We've Had
- Built HTML before the narrative was solid. The No-Show Revival article went through three full rebuilds because the narrative arc kept changing. Fix: write the full story in prose, get approval, THEN build HTML.
- Gold text on cream backgrounds. Happened on every single build. #b79d64 on #f5f4f0 fails WCAG contrast. Fix: use #6b5d3e (dark olive-gold) for all text on light backgrounds. This is in the brand specs — check every time.
- Interactive elements that don't earn their place. Early builds included hover effects and animations that added visual interest but no insight. Fix: every interactive element must create a revelation the reader couldn't get from static text.
- AI copy patterns survived QC. Twinning ("It's not X. It's Y."), three-beat parallel structures, and phrases like "here's the thing" made it into published versions. Fix: run the copy-qc-agent AFTER the narrative is written but BEFORE it goes into HTML. Easier to fix in markdown than in embedded HTML strings.
- Fabricated statistics. The Subtract/Add campaign included numbers that were invented ("40% of practices", "68% average") and had to be caught and removed. Fix: every number in the article must either come from the source material or be explicitly marked as illustrative. No plausible-sounding-but-made-up statistics.
- React visualization errors. Interactive elements using React via CDN had rendering issues, particularly with state management in complex visualizations. Fix: keep visualizations simple. If a React component needs more than 100 lines, it's too complex for an inline article element.
- Paragraph essay format instead of Steve Cunningham's structure. Early LinkedIn post drafts and article sections used flowing narrative paragraphs. Fix: short lines, white space, numbered lists, contrast. Apply this to article body copy too, not just LinkedIn.
- Footer and nav inconsistencies. Different builds used different footer formats, different nav structures, different link sets. Fix: copy the footer and nav from the most recent golden example. Don't recreate from memory.
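Some copy patterns are mechanical enough to pre-screen before the full copy-qc pass. A minimal sketch for the twinning pattern only; the regex is illustrative, not the copy-qc agent's actual logic:

```javascript
// Flags the twinning pattern: "It's not X. It's Y." (straight or curly apostrophe).
const TWINNING = /\bIt(?:'|’)s not [^.?!]+[.?!]\s+It(?:'|’)s /;

function flagTwinning(prose) {
  return TWINNING.test(prose);
}

console.log(flagTwinning("It's not a funnel. It's a filter.")); // true
console.log(flagTwinning("The calculator shows the gap in dollars.")); // false
```

A pre-pass like this runs on the markdown draft, where fixes are cheap, before the prose is embedded in HTML.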
PHASE 3: MICRO-TOOLS
Input: Tool concepts from Phase 1
Output: Two ship-ready HTML tools (typically one revelation tool + one calculator/diagnostic)
What Happens
Each campaign produces two micro-tools. They serve different purposes in the funnel: one is typically the hand-raiser deliverable (sent via DM after someone comments a keyword), the other is referenced in the article or briefing. Both drive toward the Systems Diagnostic.
Tool Types
| Type | Purpose | Example |
|---|---|---|
| Revelation Tool | "Find your number" — user inputs data, tool reveals something they didn't know | Silent List Finder, Story Finder |
| Calculator | "Calculate the cost" — user inputs data, tool quantifies the gap | Dead List Calculator, Subtract/Add Calculator |
| Diagnostic | "Score your situation" — user answers questions, tool categorizes their state | Subtract/Add Diagnostic, HQP Scorer |
Build Sequence
- Write the spec. Inputs, outputs, calculation logic, result tiers, copy for each state. Get this approved before touching code.
- Build the HTML. Follow micro-tool-SKILL.md. Single file, no external dependencies, all CSS custom properties hardcoded (not var() — micro-tools use JS-generated HTML that breaks var() references).
- Math validation. Run every input combination through the calculator. Check edge cases (zero inputs, maximum inputs, minimum inputs). Verify the output ranges are credible.
- Visual QC. Brand compliance, contrast, responsiveness, all interactive states work.
- Copy QC. Run copy-qc-agent against all visible text, including result descriptions and bridge copy.
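The math-validation step can be run as a small test matrix outside the tool. The formula and input ranges below are illustrative, not any real tool's logic:

```javascript
// Hypothetical calculator logic: recoverable revenue from a dormant list.
function estimateRecovery(listSize, reactivationRate, avgFee) {
  return Math.round(listSize * reactivationRate * avgFee);
}

// Test matrix: realistic combinations plus edge cases.
const matrix = [
  // [listSize, rate, avgFee, label]
  [0, 0.05, 3000, "edge: empty list"],
  [50, 0.05, 3000, "small practice"],
  [400, 0.05, 3000, "mid-size practice"],
  [2000, 0.08, 5000, "edge: large list, high fee"],
];

for (const [size, rate, fee, label] of matrix) {
  const out = estimateRecovery(size, rate, fee);
  // Sanity bounds: never negative, never more than the whole list at full fee.
  if (out < 0 || out > size * fee) throw new Error(`implausible output: ${label}`);
  console.log(`${label}: $${out}`);
}
```

The point is the matrix, not the math: five to ten labeled combinations, each eyeballed against what a real practice would expect.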
Skills & References
- Micro-Tool Skill (micro-tool-SKILL.md)
- Revelation Tool Skill (revelation-tool-SKILL.md)
- Golden examples: Subtract/Add Calculator, Subtract/Add Diagnostic, Silent List Finder, Dead List Calculator
Problems We've Had
- Math didn't hold up. The Silent List Finder's revenue estimates went through three rounds of recalibration because the initial ranges were either too conservative or too aggressive. Fix: build a test matrix. Run 5–10 realistic input combinations. Compare outputs to what a real practice would expect.
- Tool descriptions in campaign copy didn't match actual tool functionality. Emails and LinkedIn posts described features the tool didn't have. Fix: write tool descriptions AFTER the tool is built and tested, not before. Reference the actual inputs and outputs.
- var() CSS custom properties broke in JS-generated HTML. Micro-tools use innerHTML for dynamic content, which doesn't inherit CSS custom properties from the root. Fix: hardcode all hex values in micro-tools. Never use var().
- Hero unit formula display. The Dead List Calculator went through four versions partly because the hero unit didn't show the full input chain (which numbers produced the result). Fix: always show the user's inputs in the result display so they can verify the math.
- Bridge copy used three-beat parallel pattern. The transition from calculator results to the CTA used "A [X] needs [Y]. A [X] needs [Y]. A [X] needs [Y]." — caught in QC. Fix: break the third item in any three-part pattern. Restructure the sentence.
PHASE 4: BRIEFING
Input: Campaign case material, completed article and tools
Output: Briefing interactive (recording visual) + Briefing script + Briefing landing page
What Happens
The briefing is a 15–20 minute screen-recorded video. It consists of three assets that work together: the interactive HTML the presenter scrolls through (the visual layer), the script the presenter reads (the audio layer), and the landing page where the video lives (the delivery layer).
4A: Briefing Interactive (Recording Visual)
The presenter opens this HTML in Chrome, records their screen, and narrates over it. All interactions are hover-driven or scroll-triggered — no buttons, no clicks.
Critical distinction: The briefing is a behind-the-scenes client walkthrough, NOT the article restated with different CSS. The article teaches the framework. The briefing shows what happened when you applied it to a real client.
Build sequence:
- Map the 7-beat narrative arc (Practice → Discovery → Findings → Rewrite → Results → Gap → Decision)
- Select interaction patterns per beat (profile cards, stagger lists, hover-expand, side-by-side zones, summary numbers, layer stack, offer cards)
- Build the HTML following briefing-interactive-SKILL.md
- QC against the checklist
4B: Briefing Script
Word-for-word voiceover synced to the interactive HTML. Stage directions reference specific scroll and hover targets in the HTML.
Build sequence:
- Read the interactive HTML — note every section, every hover element
- Write the client story (who, what they thought the problem was, what you found, what you changed, what happened, where they got stuck)
- Write beat by beat with stage directions
- Read aloud while scrolling the HTML — verify sync
- Write production notes
4C: Briefing Landing Page
Timer-based delivery wrapper. Two states: watch (video + countdown) and expired (four path cards to campaign assets).
Build sequence:
- Copy the template (Subtract/Add briefing landing page)
- Swap content: title, subtitle, context section, expired path cards, links
- Set expiration date in CONFIG (UTC)
- Placeholder Vimeo URL until recording is done
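The CONFIG pattern reduces to one UTC timestamp and a state check. A sketch with a hypothetical date; note that midnight ET on Sunday is 05:00 UTC Monday under standard time, 04:00 under daylight saving, so check the calendar for the campaign week:

```javascript
// Hypothetical campaign: briefing expires midnight ET, Sunday March 1, 2026.
const CONFIG = {
  expiresAt: Date.UTC(2026, 2, 2, 5, 0, 0), // months are 0-based: 2 = March
};

// The page renders "watch" (video + countdown) or "expired" (path cards).
function pageState(nowMs) {
  return nowMs < CONFIG.expiresAt ? "watch" : "expired";
}
```

Storing the expiry in UTC means every viewer's page flips state at the same instant, regardless of their timezone.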
Skills & References
- Briefing Interactive Skill (briefing-interactive-SKILL.md)
- Briefing Interactive QC (qc-checklist-briefing-interactive.md)
- Briefing Script Skill (briefing-script-SKILL.md)
- Briefing Script QC (qc-checklist-briefing-script.md)
- Briefing Landing Page Skill (briefing-landing-page-SKILL.md)
- Briefing Landing Page QC (qc-checklist-briefing-landing-page.md)
- Golden examples: Proof Gap Briefing (interactive), Subtract/Add Briefing (landing page), Silent List Briefing (interactive + landing page)
Problems We've Had
- Built a button-driven interactive instead of hover/scroll. The first Silent List briefing interactive had "Reveal Next Category" buttons and click-triggered toggles. Wrong interaction model entirely — the Proof Gap golden example uses only hover and scroll. Fix: always read the golden example before building. The interaction model is hover + scroll, never buttons.
- Briefing content repeated the article. The first Silent List briefing draft restated the five wound types as a framework instead of showing what happened with a specific client. It was the article in a dark theme. Fix: the test is "if I deleted the article, would this briefing still make sense as a standalone story?" Both should pass that test independently.
- Script stage directions didn't match the interactive. The script said [CLICK] and [REVEAL] but the interactive uses hover-expand and scroll-reveal. Fix: stage direction vocabulary is [SCROLL], [HOVER on X], [PAUSE] — nothing else.
- Briefing landing page had wrong expiration date. Copy-paste from previous campaign without updating CONFIG. Fix: the pre-deploy checklist explicitly checks that the CONFIG date matches the hero subtext.
- Expired path cards described tools inaccurately. Card descriptions said the tool did things it didn't actually do. Fix: write expired path card copy AFTER the tools are built and tested.
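The stage-direction vocabulary fix lends itself to an automated check. A sketch of a validator, not an existing project tool:

```javascript
// Only [SCROLL], [PAUSE], and [HOVER on <target>] are legal stage directions.
const ALLOWED = /^\[(SCROLL|PAUSE|HOVER on [^\]]+)\]$/;

// Returns every bracketed direction in the script that isn't in the vocabulary.
function invalidDirections(script) {
  const found = script.match(/\[[^\]]+\]/g) || [];
  return found.filter((d) => !ALLOWED.test(d));
}

console.log(invalidDirections("[SCROLL] Meet the practice. [CLICK] Reveal.")); // flags [CLICK]
```

Run it on the script draft before the read-aloud sync pass; an empty result means the vocabulary is clean.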
PHASE 5: DISTRIBUTION CONTENT
Input: All core assets (article, tools, briefing) completed
Output: LinkedIn posts, emails, Substack notes, DM sequence, LinkedIn image card
What Happens
Distribution content is the last thing built because it references the core assets. Every claim in a LinkedIn post, every tool description in an email, every link in a DM must point to something that exists and works.
Asset Inventory Per Campaign
| Asset | Count | Notes |
|---|---|---|
| LinkedIn TL posts | 2–3 | Mon/Wed, no CTA, Steve Cunningham format |
| LinkedIn hand-raiser post | 1 | Friday, keyword → DM → tool |
| LinkedIn image card | 1 | HTML-rendered, screenshot at 1200×1200 |
| Emails | 5 | Content Launch, Drive to LinkedIn, Story, Direct Ask, Briefing Launch |
| Substack notes | 3 | Cross-post LinkedIn insights |
| DM sequence | 4–5 messages | Keyword trigger → qualify → tool link → bridge → diagnostic |
Build Sequence
- LinkedIn TL posts first. They set the week's narrative. No CTA, article link in first comment only.
- Hand-raiser post. Keyword, tool deliverable, DM track.
- Emails. In send order (Tuesday through Saturday).
- Substack notes. Cross-posted from LinkedIn insights with minor adaptation.
- DM sequence. Trigger → qualify → deliver → bridge → offer.
- LinkedIn image card. HTML file, screenshot for upload.
QC Pass — per asset, every build
Each distribution asset gets its own QC before shipping:
- LinkedIn TL posts: copy-qc.md (all 11 patterns + compound check) + linkedin-tl-post-system-instructions.md process checklist (hook quality, first line, structure, word count)
- LinkedIn image cards: qc-checklist-linkedin-image.md (2-second test, text content, mobile readability, visual brand, structure, redundancy)
- LinkedIn hand-raisers: copy-qc.md + qc-checklist-linkedin-handraiser.md
- Emails: copy-qc.md + subject line check (curiosity-driven, not descriptive)
- DM sequences: copy-qc.md (especially Pattern 6 over-validation)
- Substack notes: copy-qc.md
After fixing any QC failures, re-run the same QC to verify. The final action before "ship-ready" is a clean pass — not a fix.
Skills & References
- LinkedIn TL Post System Instructions (linkedin-tl-post-system-instructions.md)
- LinkedIn Hand-Raiser Skill (linkedin-handraiser-SKILL.md)
- LinkedIn Sentence Editor Skill (linkedin-sentence-editor-skill.md)
- Copy QC Agent (copy-qc-agent.md)
- Distribution System Master (distribution-system-master.md)
- DM System Agent (micro-tool-dm-system-agent.md)
Problems We've Had
- Distribution written before assets were finalized. Subtract/Add emails described calculator features that changed during the build. Fix: Phase 5 doesn't start until Phase 2–4 assets are ship-ready.
- Fabricated numbers in LinkedIn posts. Friday hand-raiser for Subtract/Add included invented statistics. Fix: every number must trace to a source or be removed.
- LinkedIn post used paragraph essay format. Multiple drafts were narrative paragraphs instead of Steve's short-line format. Fix: every LinkedIn post gets checked against Steve Cunningham's structure before approval.
- Image card looked identical to a previous campaign. Same "two-line quote on dark background" format repeated. Fix: vary the image format between campaigns.
- DM sequence went too long before delivering value. Early versions had 3 qualifying messages before the tool link. Fix: tool link in DM 1 or DM 2, max. Qualify by what they respond to, not by gating access.
- Email subject lines were descriptive, not curiosity-driven. "The Subtract/Add Equation" as a subject line vs. "Which deliverable do your clients actually miss?" Fix: subjects should be questions or incomplete loops.
- Substack notes were an afterthought. Multiple campaigns forgot to include them. Fix: they're on the asset inventory. Three per campaign.
- QC ran once but not after fixes. LinkedIn post failed copy QC (three-beat parallels, dramatic fragments). Fixes were applied but QC wasn't re-run until prompted. The second pass caught two remaining failures. Fix: QC always runs twice minimum — once after build, once after fixes. No asset ships on a fix pass alone.
PHASE 6: QC & POLISH
Input: All assets built
Output: Ship-ready campaign
What Happens
A systematic QC pass across every asset. This is not "does it look okay" — it's running each asset through its specific checklist.
QC Sequence
- Article: Interactive narrative QC + copy QC + brand compliance
- Tool 1: Math validation + visual QC + copy QC
- Tool 2: Math validation + visual QC + copy QC
- Briefing interactive: Briefing interactive QC checklist
- Briefing script: Briefing script QC checklist (including script-to-interactive sync)
- Briefing landing page: Landing page QC checklist (both states tested)
- Distribution content: Copy QC on every piece + link verification
- Cross-asset congruence: Numbers match across assets. Tool descriptions match actual tools. Case details are consistent within each asset (article case ≠ briefing case, but each is internally consistent).
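The congruence pass can start with a mechanical scan for retired names and stale URLs before the human read. A sketch with hypothetical asset names and terms:

```javascript
// Terms that should no longer appear anywhere after a rename or cutover.
const RETIRED_TERMS = ["Proof Gap Calculator", "/articles/proof-gap.html"];

// assets: an object mapping filename -> full text of the asset.
// Returns every (file, term) pair that still references retired material.
function congruenceFailures(assets) {
  const failures = [];
  for (const [file, text] of Object.entries(assets)) {
    for (const term of RETIRED_TERMS) {
      if (text.includes(term)) failures.push({ file, term });
    }
  }
  return failures;
}
```

Zero failures is the bar before the human congruence read, which still has to catch subtler mismatches like a tool description that names features the tool doesn't have.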
The QC Files
| Asset Type | Skill File | QC Checklist |
|---|---|---|
| Article | interactive-narrative-SKILL.md | (in Interactive Builder SOP) |
| Micro-tools | micro-tool-SKILL.md, revelation-tool-SKILL.md | (embedded in skill files) |
| Briefing interactive | briefing-interactive-SKILL.md | qc-checklist-briefing-interactive.md |
| Briefing script | briefing-script-SKILL.md | qc-checklist-briefing-script.md |
| Briefing landing page | briefing-landing-page-SKILL.md | qc-checklist-briefing-landing-page.md |
| Evergreen page + distribution | evergreen-assembly-SKILL.md | qc-checklist-evergreen-assembly.md |
| All copy | copy-qc-agent.md | — |
| LinkedIn posts | linkedin-tl-post-system-instructions.md | — |
| Hand-raisers | linkedin-handraiser-SKILL.md | — |
Problems We've Had
- QC was done piecemeal, not systematically. Assets were "reviewed" but not run through checklists. Issues that a checklist would catch (gold text on cream, wrong footer links, fabricated stats) made it to the version labeled "final." Fix: every asset gets its specific QC checklist run, not a general "looks good."
- Cross-asset congruence was never checked. An email described a tool that had been renamed. An expired path card described functionality the tool didn't have. Fix: add a congruence pass as the final QC step.
- Copy QC happened too late. AI patterns baked into HTML were harder to fix than catching them in a markdown draft. Fix: run copy QC on prose before it goes into HTML.
PHASE 7: EVERGREEN ASSEMBLY
Input: Completed campaign (post-launch week)
Output: Evergreen page (single HTML) + Evergreen distribution set (multi-angle content menu)
What Happens
Two builds. First, the evergreen page — a single HTML that recomposes the article's narrative with both tools embedded inline, a briefing section, a conversion gap, and a CTA. This becomes the campaign's permanent home, replacing the standalone article as the primary link. Second, the distribution set — multiple angle variations of posts, emails, and notes so the campaign can be rerun with different hooks pointing to the evergreen page.
7A: The Evergreen Page
The article, tools, and briefing were built as separate HTML files during the launch week. The evergreen page weaves them into one scrollable experience.
What happens to the article: Sections get condensed (~2 paragraphs max between tools), rewritten to point toward the next tool rather than stand alone. The article's conclusion gets removed (the tools replace it). The article's CTA gets replaced by the conversion gap + CTA sections. Article visualizations (React interactives) are preserved if they do persuasion work.
What happens to the tools: Standalone chrome (nav, footer, header) stripped. CSS/JS namespaced with prefixes (calc-, diag-, sf-, etc.) to avoid collisions when two tools share one page. Tool intro screens preserved — they co-exist with the narrative setup above. No var() in innerHTML.
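Both rules above, namespaced prefixes and hardcoded hex, show up in how an inline tool generates its result markup. A sketch with illustrative class names and colors (the dark background hex is an assumption; gold is acceptable here because tool sections are dark, not cream):

```javascript
// calc- prefix keeps this tool's classes from colliding with a sibling
// diag- tool on the same evergreen page. All colors are literal hex values:
// no var() references, since this string is injected via innerHTML.
function renderCalcResult(amount) {
  return `
    <div class="calc-result" style="background:#1a1a18;color:#f5f4f0;">
      <span class="calc-result-number" style="color:#b79d64;">$${amount}</span>
    </div>`;
}
```

The same prefix discipline applies to the tool's CSS selectors and any JS hooks it registers.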
Section arc: Hero → Narrative → Tool 1 → Narrative → Tool 2 → Briefing → Conversion Gap → CTA. Backgrounds alternate cream/dark/off-white. Tool sections always dark. Never two adjacent sections with the same background.
Conversion gap: Three points, each following "You can [X]. You can't [Y] — because [Z]." Leads directly to CTA with no intervening narrative.
Section nav: Fixed/sticky, highlights on scroll. Both tools, briefing, and diagnostic get nav links. Hero and conversion gap do not.
7B: The Distribution Set
Angle expansion: Mine angles from briefing (richest source), article, tools, and diagnostic results. Filter to 3–5 distinct entry emotions. Write fully for each.
Content menu per campaign:
- 6–10 LinkedIn TL posts (different angles, Steve format, first comment links to evergreen page)
- 3–5 hand-raiser posts (unique keywords per post)
- 10–15 emails (2–3 alternates per slot × 5 slots, date-agnostic)
- 9 Substack notes (3 per slot × 3 slots)
- DM openers per keyword (base sequence carries over)
Campaign combos: 3+ pre-assembled weekly cycles. Each combo: 2 TL posts + 1 hand-raiser + 5 emails + 3 notes, tonally cohesive from one emotional family.
Skills & References
- Evergreen Assembly Skill (evergreen-assembly-SKILL.md) — page build rules + distribution process
- Evergreen Assembly QC (qc-checklist-evergreen-assembly.md)
- Golden examples: Subtract/Add Evergreen, Proof Gap Evergreen
Problems We've Had
- Built the distribution set without building the evergreen page. The Subtract/Add campaign produced 30+ distribution assets but the evergreen page wasn't assembled until the Proof Gap campaign. Fix: the page is the primary deliverable. Distribution follows.
- Pasted the article instead of recomposing it. First Proof Gap evergreen attempt dropped the React visualizations and replaced them with static text. Felt flat. Fix: article visualizations carry over if they do persuasion work. Narrative sections condense but don't lose the argument.
- CSS/JS collisions between inline tools. Two tools sharing generic class names (.question, .result) overrode each other. Fix: namespace everything with tool-specific prefixes.
- var() in innerHTML broke tool styling. JS-generated HTML doesn't inherit CSS custom properties. Fix: hardcode hex values.
- Evergreen conversion was never completed on first campaign. The Subtract/Add campaign produced a distribution package but the evergreen page assembly was deferred. Fix: Phase 7 is a defined deliverable with both page and distribution, not a "we'll get to it."
- Angle variations were too similar. Multiple LinkedIn posts felt like the same post with different opening lines. Fix: entry emotion test — each angle must target a different feeling.
- Distribution still linked to standalone article. After the evergreen page was live, some distribution content still pointed to the old article URL. Fix: update all first-comment links, email links, and DM links to the evergreen page.
LIVING DOCUMENT PRACTICES
During Each Build Session
Maintain a working document that captures:
- Decisions made (and the reasoning)
- Open items (questions that need answers before proceeding)
- Rejected ideas (and why — prevents revisiting dead ends)
- Asset status (built / in QC / ship-ready)
Update and output the working document at the end of every session.
Across Campaigns
After each campaign, update:
- This workflow (new problems encountered, new solutions)
- Skill files (if a build pattern was refined)
- QC checklists (if a new failure mode was discovered)
- Golden example inventory (if this campaign produced a better example)
FULL ASSET INVENTORY PER CAMPAIGN
For reference — the complete set of assets a fully-built campaign produces:
Core Assets (Phases 2–4):
- [ ] Interactive thought leadership article (HTML)
- [ ] Micro-tool #1 — revelation tool or calculator (HTML)
- [ ] Micro-tool #2 — calculator or diagnostic (HTML)
- [ ] Briefing interactive — recording visual (HTML)
- [ ] Briefing script — voiceover narrative (Markdown)
- [ ] Briefing landing page — timer-based delivery (HTML)
Distribution Assets (Phase 5):
- [ ] LinkedIn TL post #1 (no CTA)
- [ ] LinkedIn TL post #2 (no CTA)
- [ ] LinkedIn hand-raiser post (keyword → DM)
- [ ] LinkedIn image card (HTML → screenshot)
- [ ] Email 1 — Content Launch
- [ ] Email 2 — Drive to LinkedIn
- [ ] Email 3 — Story / Insight
- [ ] Email 4 — Direct Ask
- [ ] Email 5 — Briefing Launch
- [ ] Substack note #1
- [ ] Substack note #2
- [ ] Substack note #3
- [ ] DM sequence (4–5 messages)
Evergreen Assets (Phase 7):
- [ ] Evergreen page (single HTML — article recomposed with inline tools, briefing, gap, CTA)
- [ ] 2–3 additional LinkedIn TL posts (new angles)
- [ ] 1–2 additional hand-raiser posts (different keywords)
- [ ] Email variations per angle (date-agnostic)
- [ ] Substack note variations per angle
- [ ] DM opener variations per keyword
- [ ] Suggested campaign combos (pre-assembled weekly cycles)
Total: ~25–30 individual assets per campaign + 1 evergreen page
BUILD ORDER SUMMARY
Phase 0: Concept Extraction → Campaign Input Brief
Phase 1: Campaign Architecture → Campaign Map + Narrative Arc
Phase 2: Article Build → Ship-ready interactive HTML
Phase 3: Micro-Tools Build → Two ship-ready HTML tools
Phase 4: Briefing Build → Interactive + Script + Landing Page
Phase 5: Distribution Content → LinkedIn, Email, Substack, DMs
Phase 6: QC & Polish → Systematic checklist pass on everything
Phase 7: Evergreen Assembly → Page build (article + inline tools) + angle expansion + combos
The rule: Each phase's output is the next phase's input. Don't skip ahead. The most expensive mistake in this workflow is building assets before upstream decisions are locked.
The QC rule: Every asset follows Build → QC → Fix → Re-QC → Ship. QC runs after the first build AND after every round of fixes. No asset ships on a fix pass alone — the final action before "ship-ready" is always a clean QC pass with zero failures. This applies to every phase that produces an asset (Phases 2–5, 7).
Workflow document — first pass
Produced from: Subtract/Add, Proof Gap, Politeness Premium, and Silent List campaigns
Last updated: February 25, 2026