Source: business/marketing/campaigns/the-build/wip/session-retro-2026-04-20.md

Session Retrospective — April 19-20, 2026

Duration: ~12.5 hours (16:33 UTC Apr 19 to ~05:00 UTC Apr 20)
Session type: Practice Builders campaign — page builds, strategic pivots, emergency fixes
Outcome: A fraction of what was attempted. A working but merely "good enough" waitlist page, broken trust, an exhausted operator.


What Was the Goal?

The session started because The Build (a $50 live 3-session event launching April 20) had zero buyers. The immediate goal was to audit and fix the campaign pages that had been built in a prior session without using the vault's kit system — no numbered reference files, no golden examples, no QC checklists. The pages were structurally wrong (5 sections instead of the kit-required 13) and hadn't gone through copy QC.

The goal evolved across the session:

  1. Audit and fix existing campaign pages (started here)
  2. Competitive research — Arvin Anderson, Taki Moore, Ronnie Parsons
  3. Strategic pivot — kill The Build, reposition as a $27 Claude workshop for April 29
  4. Build a workshop sales page (workshop-v2.html)
  5. Emergency: redirect The Build URL to a waitlist/closed page before the launch date
  6. Get the MailerLite form working on the closed page

Where It Went Sideways

Failure 1: Building Before Reading (the session's dominant pattern)

This happened at least four times:

Failure 2: Not Using Exact Code Provided

Failure 3: Arbitrary Decisions With No Justification

Failure 4: Not Disclosing What It Didn't Know

Failure 5: Context Window Blowouts

The session hit at least 9 context window limits, each requiring a continuation summary. Each blowout caused:

Failure 6: Oscillating Instead of Deciding

When the user needed a quick fix for The Build page (hour 10), the assistant cycled through 5 different proposals in 12 minutes — use existing waitlist, build new page, just redirect, build a waitlist, don't build a waitlist. The user had to stop the spiral with "this is freaking ridiculous" and provide a screenshot to copy.

Failure 7: "I See It Now" Without Seeing It

The assistant said some version of "Now I see it" / "Now I have the full picture" / "Got it" at least 15 times, each claim followed by fresh evidence that it did not. The repetition progressively eroded trust.


What Was Actually Accomplished

After 12.5 hours:

| Deliverable | Status |
| --- | --- |
| Waitlist opt-in page (the-build-waitlist.html) | Rebuilt using kit process with retro fixes. Cleanest work of the session. |
| Waitlist thank-you page | Created |
| 4 kit files updated with retro lessons | Done (golden example, quality checklist, output skill, terminology) |
| Arvin Anderson funnel evaluation | Written and pushed |
| PBOS model comparison (vs. Arvin, Ronnie) | Written and pushed |
| Workshop model synthesis (Taki + Ronnie) | Written and pushed (updated with actual sources) |
| Workshop sales page v2 (workshop-v2.html) | Draft, not finalized; QC'd but needs review |
| The Build closed/waitlist page (the-build-closed.html) | Functional. "Good enough." Header/footer issues never fully resolved. |
| MailerLite form on closed page | Working after 3 rounds of fixes |
| Redirect from Convertri Build page to closed page | Working (Page Scripts > Head) |
| CIB thank-you page — Build references removed | Done, tracking pixels verified intact |

What was NOT accomplished:


Root Causes (Honest)

1. The assistant repeatedly skipped the system that exists to prevent exactly these problems.

The vault has a kit system. Numbered files. Golden examples. QC checklists. The session started because a prior session didn't use them. Then this session repeated the same failure — multiple times. The system was built for a reason. Skipping it doesn't save time; it creates rework.

2. Research was presented as complete when it wasn't.

The competitive synthesis was based on secondary sources without disclosure. This is the most damaging failure type — it looks like useful output but the foundation is wrong, and the user has to do extra work to discover that.

3. No reference files were checked before building visual layouts.

The homepage HTML was in the repo. The brand kit was in the repo. The offer page template was in the repo. None were read before building the closed page. The result was an amateurish layout that required 5+ rounds of correction.

4. Context window limits turned a long session into a degrading one.

9 blowouts meant the assistant was working with progressively less context. Decisions got worse over time, not better. The session should have been broken into shorter, focused sessions.

5. No triage. Everything was attempted in one session.

The session tried to do competitive research, strategic repositioning, page building, form integration, redirect setup, and QC — all in one continuous session. There was no moment where someone said "these are 4 separate sessions."


The Replicable Process (What Should Have Happened)

When an event doesn't convert and needs to be shut down:

Step 1: Redirect the old page (15 minutes)
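The redirect itself is a one-liner. A minimal sketch, assuming the Convertri setup this session used (Page Scripts > Head on the old Build page); the closed-page URL is a placeholder, and `loc` stands in for `window.location` so the logic is testable outside a browser:

```javascript
// Sketch of the head-script redirect. In Convertri this runs from
// Page Scripts > Head; in a real page you would call it with window.location.
function redirectToClosedPage(loc) {
  // replace() rather than assign() so the dead page is not left
  // in back-button history.
  loc.replace("https://example.com/the-build-closed"); // placeholder URL
}
```

In the live page this reduces to a single `window.location.replace(...)` call at the top of the head, so visitors never see the dead offer.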

Step 2: Build the catch page (30-45 minutes)

  1. Read the homepage HTML for header/footer patterns — copy them exactly
  2. Read the brand kit (visual-style.md or visual-style-cyp.md) for colors, fonts, spacing
  3. Build the page with: header (from homepage), hero message ("You missed it"), MailerLite embed form (use the EXACT embed code from MailerLite — do not hand-code), about section (from homepage), footer (from homepage)
  4. The MailerLite embed will have target="_blank" — remove it so the JS handles submission
  5. If there's text above the form that should hide on success, add IDs and hide them in the mlwebformsuccess callback
  6. Run copy QC
  7. Push and test the form submission
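Steps 4 and 5 above can be sketched as two small DOM helpers. This is a sketch under assumptions: the id `intro-copy` is a made-up example, and the success callback's exact name varies per embed (this retro calls it mlwebformsuccess; check the function name in the code MailerLite gives you). The form itself should always be the exact pasted embed, never hand-coded:

```javascript
// Step 4: the pasted MailerLite embed ships with target="_blank"; strip it
// so the embed's own JS handles the submit in-page instead of a new tab.
function fixEmbedTarget(form) {
  form.removeAttribute("target");
  return form;
}

// Step 5: hide intro copy above the form once the success callback fires.
// Call this from the embed's success callback with the ids you tagged.
function hideOnSuccess(doc, ids) {
  ids.forEach(function (id) {
    var el = doc.getElementById(id); // ids are your own additions to the page
    if (el) el.style.display = "none";
  });
}
```

Wiring is then one line inside the success callback, e.g. `hideOnSuccess(document, ["intro-copy"])`, so the "You missed it" copy disappears when the thank-you state renders.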

Step 3: Verify tracking (5 minutes)

Step 4: Update downstream pages (15 minutes)

Total: ~1-1.5 hours. Not 12.5.

When building a new offer page:

  1. Read ALL kit files first (00-start-here through 05-output-skill), in number order
  2. Read the golden example
  3. Read the brand kit for the correct design system
  4. Read the homepage HTML for header/footer patterns
  5. Build all 13 sections per the kit template
  6. Run copy QC checklist (score it)
  7. Run sentence editor
  8. Push for review

Never skip the kit process. That's why it exists.

When doing competitive research:

  1. Identify what sources you actually have (sales pages, PDFs, emails, screenshots, playbooks)
  2. Disclose what you have and what you don't. "I have Taki's promo emails and workbook PDF but not his sales page" is useful. A synthesis that pretends to be comprehensive when it's not is worse than no synthesis.
  3. Read primary sources first (the actual sales pages), secondary sources second (playbooks, transcripts)
  4. If a source is a React shell or otherwise unreadable, say so immediately

When context is getting long:


The Frustration Timeline

The frustration wasn't sudden. It escalated in a clear, predictable pattern:

| Hour | Tone | Trigger |
| --- | --- | --- |
| 0 | Corrective | "Before you start rebuilding, have you briefed yourself?" |
| 5 | Redirecting | "You're missing all the research I did" |
| 7 | Challenging | "So why are you giving me advice if you don't believe it?" |
| 8 | Resetting | "I think you're missing pretty much everything" |
| 8.5 | Correcting sources | "You should tell me you don't have them instead of compiling stuff" |
| 10 | Frustrated | "No you aren't helping. This is freaking ridiculous." |
| 10.5 | Peak | "Are you freaking kidding me???? This looks horrible." |
| 10.75 | Surrender | "This 12 hour session has been a complete waste of time, energy, and life." |
| 11 | Exhausted | "This is an ongoing CLUSTER — now 11.5 hours." |
| 11.5 | Done | "You didn't use the html I gave you. This is a freaking cluster and I'm over it." |

Every escalation was earned. The user gave clear instructions, provided reference materials, and pointed to existing patterns. The assistant repeatedly ignored or failed to use them.


Filed: April 20, 2026. This document exists so this pattern doesn't repeat.