Concept Brief — The Site Survey
Date: 2026-04-10
Status: Draft — needs Kathryn validation
Mode: Handraiser (free skill) with upgrade path to PBOS entry assessment
Inspiration: Niklaus Serafino "Head of AI" prompt — conversational week-walk extraction technique. Adapted through the AOS lens: maturity-first, constraint-first, produces infrastructure, not reports.
The Problem
Practice owners know things aren't working but can't see WHERE the drag is coming from. Three reasons it stays invisible:
- They've never mapped their week against the three areas that actually drive a practice (offer, operations, pipeline) — so they optimize by feel, not by structure
- They confuse being busy with being productive — the work that eats the most hours is often the work with the lowest maturity (no SOP, no delegation, no system)
- They don't know what to build first — so they build nothing, or they build the wrong thing and lose momentum
What This Skill Does
The Site Survey is a conversational assessment that walks a practice owner through their week and produces a Build Order — a personalized, prioritized list of what to build first, second, and third in their practice.
It is NOT an audit. It does not produce a report to read. It produces a ranked sequence of specific systems to build — each one mapping to a real skill they can deploy inside Practice Builders OS or on their own.
Two jobs:
- Make the invisible visible. Walk through the week, surface every significant workflow, score each one against maturity (is there an SOP? is it delegated? is it automated?) and impact (does this touch clients? does this touch revenue? does this eat the owner's time?).
- Produce the Build Order. Rank the gaps. Tell them: build THIS first because it's the highest-impact, lowest-maturity system in your practice. Then THIS. Then THIS. Three builds. Specific. Sequenced. Ready to start. (See the ranking sketch below.)
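In code, that score-and-rank step could look like the sketch below. The 1-5 impact scale, the field names, and the `impact * (4 - maturity)` gap formula are illustrative assumptions; the brief specifies only that the Build Order ranks by impact × maturity gap.

```python
from dataclasses import dataclass

MAX_MATURITY = 4  # Structure=1, Streamline=2, Delegate=3, Autonomous=4


@dataclass
class Workflow:
    name: str
    area: str       # "Offer" | "Ops" | "Pipeline"
    maturity: int   # 1-4 on the maturity model
    impact: int     # illustrative 1-5 composite: clients, revenue, owner's time


def build_order(workflows: list[Workflow], top_n: int = 3) -> list[Workflow]:
    """Rank by impact x maturity gap: highest-impact, lowest-maturity first."""
    return sorted(
        workflows,
        key=lambda w: w.impact * (MAX_MATURITY - w.maturity),
        reverse=True,
    )[:top_n]
```

Under this formula, a high-impact workflow with no SOP (maturity 1, gap 15 at impact 5) outranks an equally impactful workflow that is already delegated (gap 5), which matches the "highest-impact, lowest-maturity" rule above.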
IP Direction
| Concept | What It Captures | Source |
|---|---|---|
| Scalability Diagnostic | 3-area assessment (Offer & Positioning, Operations & Delivery, Pipeline & Conversion); score range 35-175 | Gen 1 IP — business-aos/reference/core/ip-inventory.md. Needs AI-era refresh. |
| System Maturity Model | 4-level gated progression: Structure (SOP) → Streamline (AI) → Delegate (Agent) → Autonomous | advisory-os-vault/content/frameworks/kit-maturity-model/ |
| Ops OS Skill Scoring | 50 skills evaluated, top 15 ranked — the systems the Build Order should point to | campaigns/30-day/wip/ops-os-skill-scoring.md |
| 6 Capability Categories | Authority OS, Visibility OS, Ops OS, Prospecting OS, Services OS, Product OS | advisory-os-vault/CLAUDE.md — used across diagnostics and all client work |
| Constraint Priority Matrix | How AOS diagnoses and tiers constraints — the methodology behind "what to fix first" | advisory-os-vault/content/frameworks/kit-constraint-priority-matrix/ |
| "Walk me through your week" extraction | Conversational technique that surfaces workflows by walking Monday-Friday instead of asking people to self-diagnose from memory | Inspiration: Niklaus Serafino "Head of AI" prompt. Adapted: no AI tool recommendations, maturity scoring instead of "automation potential," Build Order output instead of report. |
IP Gaps
IP Gap: The Scalability Diagnostic scoring model (35-175) has not been documented in any vault file as a usable framework. The 3-area structure is referenced in business-aos/decisions/2026-03-26-campaign-architecture-decisions.md, but the actual questions, scoring weights, and thresholds are not in the vault. A content interview is required to extract: the original diagnostic questions, how Kathryn scored them, what each threshold meant, and what needs to change for the AI era.
IP Upgrade: The Maturity Model exists as an HTML visualization and a readiness assessment but hasn't been converted to a scoring rubric a skill can apply to individual workflows. Needed: a simple 1-4 scoring key that maps each workflow to a maturity level based on conversational answers (not self-assessment checkboxes).
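A minimal sketch of what that 1-4 key could look like, assuming the gates follow the model's Structure → Streamline → Delegate → Autonomous progression and that the skill infers each boolean from conversational answers. The signal names and the 0-for-no-system convention are assumptions, not settled design:

```python
def score_maturity(has_sop: bool, ai_assisted: bool,
                   delegated_to_agent: bool, runs_autonomously: bool) -> int:
    """Gated 1-4 scoring key for a single workflow.

    1 = Structure (an SOP exists), 2 = Streamline (AI-assisted),
    3 = Delegate (an agent runs it), 4 = Autonomous. Gates are
    sequential: a later gate only counts if every earlier gate passes.
    Returns 0 for "no system at all" -- how that case maps onto the
    1-4 scale is an open design question.
    """
    level = 0
    for gate in (has_sop, ai_assisted, delegated_to_agent, runs_autonomously):
        if not gate:
            break
        level += 1
    return level
```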
Design Constraint Check
| Constraint | How This Skill Meets It |
|---|---|
| Can't fail | Conversational input — the skill asks questions one at a time, the owner answers by typing or dictating. No pasting, no data assembly, no spreadsheets. If you can describe your week, you can run this. Works with vague answers (the skill follows up). |
| Sustainable | Run quarterly. The practice changes — new clients, new team members, new tools. The Build Order updates each time. Section at the end compares to previous run if they paste a prior Build Order. |
| Win fast | First run produces a Build Order in ~15 minutes of conversation. The win: "I've been staring at 20 things I should fix. Now I know which 3 to build first and why." The Build Order is specific enough to start TODAY. |
Quality Bar
- The Build Order reads like output from a $500 operations diagnostic
- Each recommendation traces to a specific system they can build — not vague advice
- The maturity scoring reveals at least one "I didn't realize that had no system behind it" moment
- Paired with PBOS: the Build Order IS their onboarding. They know exactly which monthly build to start with
Foundational Skill Dependency
The Site Survey works WITHOUT the foundational skills (Service List, ICP, Voice). Answer the questions, get a Build Order.
It works BETTER with them. If the skill knows what services you offer, it can assess offer maturity. If it knows your ICP, it can evaluate pipeline-to-market fit. If it knows your team, it can assess delegation readiness per workflow.
For the 30-day campaign: Works standalone. No prerequisites beyond being a practice owner with a week to describe.
Inside Practice Builders OS: The Site Survey IS the entry assessment. New members run it first. The Build Order becomes their personalized path through the monthly build cycle. This is what makes PBOS retention stick — 12+ months of builds in the right sequence for THEIR practice.
The Skill Output (Sections)
| # | Section | Job |
|---|---|---|
| 1 | Practice Snapshot | Who you are, what you do, team size, client count — orientation |
| 2 | Weekly Time Map | Every significant workflow mapped: who does it, hours/week, which area it falls in (Offer / Ops / Pipeline) |
| 3 | Maturity Scores | Each workflow scored 1-4 on the maturity model (Structure / Streamline / Delegate / Autonomous) with one-line evidence from what they said |
| 4 | Area Summary | Aggregate score per area (Offer & Positioning, Operations & Delivery, Pipeline & Conversion) — where the practice is strong and where it's exposed |
| 5 | The Build Order | Top 3 systems to build, ranked by impact × maturity gap. Each one names the specific system, why it's first/second/third, and what changes when it's built |
| 6 | What's Working | 2-3 things that are already at high maturity — so the owner knows what NOT to touch |
| 7 | Quarterly Comparison | If they paste a prior Build Order: what moved, what didn't, what's next |
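As a data shape, the seven sections above might look like the following sketch. Every field name is illustrative, and the types are guesses at what the conversation can actually populate:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class BuildItem:
    system: str   # the specific system to build
    rank: int     # 1, 2, or 3
    why: str      # why it's first/second/third
    payoff: str   # what changes when it's built


@dataclass
class SiteSurveyOutput:
    practice_snapshot: str                      # 1. who, what, team size, client count
    weekly_time_map: list[dict]                 # 2. workflow, owner, hours/week, area
    maturity_scores: dict[str, int]             # 3. workflow -> 1-4, evidence alongside
    area_summary: dict[str, int]                # 4. aggregate score per area
    build_order: list[BuildItem]                # 5. top 3, ranked
    whats_working: list[str]                    # 6. high-maturity systems to leave alone
    quarterly_comparison: Optional[str] = None  # 7. only if a prior Build Order is pasted
```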
The Conversation Flow
The skill asks questions one at a time. Three phases.
Phase 1: Practice Snapshot (2-3 questions)
- What does your practice do and who do you serve?
- How many people — including you — work in the practice? What does each person do?
- Roughly how many active clients do you serve?
Phase 2: The Week Walk (5-8 questions, depending on depth)
- Walk me through a typical Monday morning. What's the first thing you do when you sit down? What tools do you open?
- Follow-up per workflow surfaced: Who does this besides you? Is there a documented process, or does it live in someone's head? How often does it happen?
- Now walk me through the rest of the week — what are the big blocks of work Tuesday through Friday?
- What about the recurring stuff that doesn't happen every week — monthly reporting, quarterly planning, annual reviews?
- What falls through the cracks? What gets dropped, delayed, or forgotten most often?
- Where does your team come to you when they shouldn't have to? What questions do they ask that a system should answer?
Phase 3: Build Order (no questions — analysis and output)
Score every workflow against maturity. Map to the three areas. Produce the Build Order.
Present the full output and ask: "Does this match your reality? Anything I got wrong or missed?"
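One way the one-question-at-a-time loop could be wired up, as a sketch: `ask` stands in for whatever I/O the skill runtime provides and is hypothetical, and the question lists are trimmed to a subset of the ones named above.

```python
PHASES: dict[str, list[str]] = {
    "snapshot": [
        "What does your practice do and who do you serve?",
        "How many people -- including you -- work in the practice?",
        "Roughly how many active clients do you serve?",
    ],
    "week_walk": [
        "Walk me through a typical Monday morning.",
        "Now walk me through the rest of the week.",
        "What falls through the cracks most often?",
    ],
}


def run_survey(ask) -> dict[str, list[str]]:
    """Ask one question at a time, phase by phase.

    Per-workflow follow-ups ("Who does this besides you? Is there a
    documented process?") would branch off each week_walk answer;
    that branching is elided here. Phase 3 consumes the answers --
    see the ranking sketch earlier in the brief.
    """
    answers: dict[str, list[str]] = {}
    for phase, questions in PHASES.items():
        answers[phase] = [ask(q) for q in questions]
    return answers
```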
Handraiser / PBOS Upgrade Path
As a handraiser: Free skill. Run it, get your Build Order, see what to build first. The Build Order names specific systems. Some of those systems ARE skills in the PBOS toolkit.
The bridge: "You've got your Build Order. Inside Practice Builders OS, each of these systems is a monthly build — with a live workshop, async support, and a skill you install and run. Your Build Order becomes your membership path."
Inside PBOS: The Site Survey runs again every quarter. The Build Order updates. Members always know what's next. This is the retention engine — not content, not community vibes, not drip courses. A living sequence of builds personalized to their practice.
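The quarterly re-run implies a comparison step. A minimal sketch, assuming each Build Order can be reduced to an ordered list of system names (the brief doesn't fix a format):

```python
def compare_build_orders(prior: list[str], current: list[str]) -> dict[str, list[str]]:
    """What moved, what didn't, what's new since the last run."""
    prior_set, current_set = set(prior), set(current)
    return {
        "moved": [s for s in prior if s not in current_set],   # built or deprioritized
        "still_open": [s for s in prior if s in current_set],  # carried over
        "new": [s for s in current if s not in prior_set],     # surfaced this quarter
    }
```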
Teaching Story
TBD — needs real testing.
Kathryn runs the skill on her own practice and reports:
- How long did the conversation take?
- Did the maturity scoring feel accurate?
- Did the Build Order surprise her or confirm what she already knew?
- Was the "what's working" section right?
- Would she hand this to a prospect and say "this is what we'd build first"?
Distribution
| Field | Value |
|---|---|
| Trigger word | TBD |
| Delivery URL | TBD |
| Cloudinary URL | TBD |
| Campaign position | 30-day build-in-public series |
Open Questions
1. Name: "The Site Survey" is the PBOS language. For the handraiser, does it need a more descriptive name? ("Practice Operations Survey"? "Build Order Generator"? Or keep Site Survey — it's distinctive.)
2. 3 areas vs 6 categories: The Scalability Diagnostic uses 3 areas (Offer, Ops, Pipeline). AOS uses 6 capability categories. Which framework for the skill? 3 is simpler for a free handraiser. 6 is more precise for PBOS.
3. Audience shift: The ops-os-skill-scoring targets $100K-$500K. The existing AOS audience is $500K-$2M. The Site Survey should work for both — but the Build Order recommendations may differ. Is that a problem or a feature?
4. Maturity scoring depth: Full 4-level model (Structure/Streamline/Delegate/Autonomous) or simplified 3-level for the free version (No System / Documented / Delegated)?
5. Build Order specificity: Should the Build Order name specific skills from the ops-os-skill-scoring top 15? Or describe the system generically and let PBOS make the connection? Naming skills is more actionable but creates a harder dependency.
6. Scalability Diagnostic IP: Kathryn needs to confirm whether the original diagnostic questions and scoring are worth preserving or if this is a clean-sheet rebuild.
7. Relationship to Constraint Identifier: The Site Survey surfaces operational gaps. The Constraint Identifier extracts what's broken from a CEO conversation. Different inputs, overlapping territory. Is the Site Survey the DIY version of what the Constraint Identifier does with an advisor?
Next Steps
- [ ] Kathryn validates this brief
- [ ] Resolve open questions (especially #2, #4, #5)
- [ ] Kathryn tests the skill on her own practice
- [ ] Capture teaching story from test results
- [ ] Build delivery page, post, DM sequence through respective kits