Source: business/marketing/campaigns/30-day/wip/skill-concept-brief-site-survey.md

Concept Brief — The Site Survey

Date: 2026-04-10
Status: Draft — needs Kathryn validation
Mode: Handraiser (free skill) with upgrade path to PBOS entry assessment
Inspiration: Niklaus Serafino "Head of AI" prompt — conversational week-walk extraction technique. Adapted through AOS lens: maturity-first, constraint-first, produces infrastructure not reports.


The Problem

Practice owners know things aren't working but can't see WHERE the drag is coming from. Three reasons it stays invisible:

  1. They've never mapped their week against the three areas that actually drive a practice (offer, operations, pipeline) — so they optimize by feel, not by structure
  2. They confuse being busy with being productive — the work that eats the most hours is often the work with the lowest maturity (no SOP, no delegation, no system)
  3. They don't know what to build first — so they build nothing, or they build the wrong thing and lose momentum

What This Skill Does

The Site Survey is a conversational assessment that walks a practice owner through their week and produces a Build Order — a personalized, prioritized list of what to build first, second, and third in their practice.

It is NOT an audit. It does not produce a report to read. It produces a ranked sequence of specific systems to build — each one mapping to a real skill they can deploy inside Practice Builders OS or on their own.

Two jobs:

  1. Make the invisible visible. Walk through the week, surface every significant workflow, score each one against maturity (is there an SOP? is it delegated? is it automated?) and impact (does this touch clients? does this touch revenue? does this eat the owner's time?).
  2. Produce the Build Order. Rank the gaps. Tell them: build THIS first because it's the highest-impact, lowest-maturity system in your practice. Then THIS. Then THIS. Three builds. Specific. Sequenced. Ready to start.
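
The ranking logic above — highest impact, lowest maturity first — can be sketched in a few lines. This is a hypothetical illustration, not the documented scoring model: the field names, the impact proxy, and the example workflows are all assumptions.

```python
from dataclasses import dataclass

MAX_MATURITY = 4  # Structure=1, Streamline=2, Delegate=3, Autonomous=4

@dataclass
class Workflow:
    name: str
    area: str                   # "Offer" | "Ops" | "Pipeline"
    maturity: int               # 1-4 on the maturity model
    touches_clients: bool
    touches_revenue: bool
    owner_hours_per_week: float

def impact(w: Workflow) -> float:
    """Crude impact proxy: client/revenue exposure plus owner time drag."""
    score = 0.0
    if w.touches_clients:
        score += 1.0
    if w.touches_revenue:
        score += 1.0
    score += w.owner_hours_per_week / 10  # time drag, scaled down
    return score

def build_order(workflows: list[Workflow], top_n: int = 3) -> list[Workflow]:
    """Rank by impact x maturity gap; highest-impact, lowest-maturity first."""
    def priority(w: Workflow) -> float:
        return impact(w) * (MAX_MATURITY - w.maturity)
    return sorted(workflows, key=priority, reverse=True)[:top_n]

# Invented example workflows for illustration only.
workflows = [
    Workflow("Client onboarding", "Ops", 1, True, True, 6),
    Workflow("Proposal drafting", "Pipeline", 2, True, True, 4),
    Workflow("Internal reporting", "Ops", 3, False, False, 2),
    Workflow("Quarterly pricing review", "Offer", 1, False, True, 1),
]
for i, w in enumerate(build_order(workflows), 1):
    print(f"{i}. {w.name} ({w.area})")
```

With these toy numbers, client onboarding wins: it touches clients and revenue, eats the most owner hours, and sits at the lowest maturity level.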

IP Direction

| Concept | What It Captures | Source |
| --- | --- | --- |
| Scalability Diagnostic | 3-area assessment (Offer & Positioning, Operations & Delivery, Pipeline & Conversion), scoring 35-175 | Gen 1 IP — business-aos/reference/core/ip-inventory.md. Needs AI-era refresh. |
| System Maturity Model | 4-level gated progression: Structure (SOP) → Streamline (AI) → Delegate (Agent) → Autonomous | advisory-os-vault/content/frameworks/kit-maturity-model/ |
| Ops OS Skill Scoring | 50 skills evaluated, top 15 ranked — the systems the Build Order should point to | campaigns/30-day/wip/ops-os-skill-scoring.md |
| 6 Capability Categories | Authority OS, Visibility OS, Ops OS, Prospecting OS, Services OS, Product OS | advisory-os-vault/CLAUDE.md — used across diagnostics and all client work |
| Constraint Priority Matrix | How AOS diagnoses and tiers constraints — the methodology behind "what to fix first" | advisory-os-vault/content/frameworks/kit-constraint-priority-matrix/ |
| "Walk me through your week" extraction | Conversational technique that surfaces workflows by walking Monday-Friday instead of asking people to self-diagnose from memory | Inspiration: Niklaus Serafino "Head of AI" prompt. Adapted: no AI tool recommendations, maturity scoring instead of "automation potential," Build Order output instead of report. |

IP Gaps

IP Gap: The Scalability Diagnostic scoring model (35-175) has not been documented in any vault file as a usable framework. The 3-area structure is referenced in business-aos/decisions/2026-03-26-campaign-architecture-decisions.md but the actual questions, scoring weights, and thresholds are not in the vault. Content interview required to extract: the original diagnostic questions, how Kathryn scored them, what thresholds meant what, and what needs to change for the AI era.

IP Upgrade: The Maturity Model exists as an HTML visualization and a readiness assessment but hasn't been converted to a scoring rubric a skill can apply to individual workflows. Needed: a simple 1-4 scoring key that maps each workflow to a maturity level based on conversational answers (not self-assessment checkboxes).
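One shape that 1-4 scoring key could take, sketched below as a gated progression: a workflow only reaches a level if it has passed every level below it. The boolean gates, and the choice to return 0 for "no SOP yet," are assumptions about what the conversation surfaces — not the documented rubric.

```python
LEVEL_NAMES = {
    0: "No system",
    1: "Structure (SOP)",
    2: "Streamline (AI)",
    3: "Delegate (Agent)",
    4: "Autonomous",
}

def maturity_level(has_sop: bool, ai_assisted: bool,
                   delegated: bool, runs_without_owner: bool) -> int:
    """Gated progression: count consecutive gates passed, in order.
    Returns 0 when there is no SOP at all (below level 1)."""
    level = 0
    for gate in (has_sop, ai_assisted, delegated, runs_without_owner):
        if not gate:
            break
        level += 1
    return level

# SOP exists and AI is in the loop, but nobody else runs it yet:
print(maturity_level(True, True, False, False))   # → 2 (Streamline)
# Delegated but never documented — fails the first gate:
print(maturity_level(False, False, True, False))  # → 0 (No system)
```

The second example is the point of gating: a workflow someone else runs from memory still scores 0, because without an SOP there is nothing to streamline or hand off reliably.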


Design Constraint Check

| Constraint | How This Skill Meets It |
| --- | --- |
| Can't fail | Conversational input — the skill asks questions one at a time, the owner answers by typing or dictating. No pasting, no data assembly, no spreadsheets. If you can describe your week, you can run this. Works with vague answers (the skill follows up). |
| Sustainable | Run quarterly. The practice changes — new clients, new team members, new tools. The Build Order updates each time. A section at the end compares to the previous run if they paste a prior Build Order. |
| Win fast | First run produces a Build Order in ~15 minutes of conversation. The win: "I've been staring at 20 things I should fix. Now I know which 3 to build first and why." The Build Order is specific enough to start TODAY. |

Quality Bar


Foundational Skill Dependency

The Site Survey works WITHOUT the foundational skills (Service List, ICP, Voice). Answer the questions, get a Build Order.

It works BETTER with them. If the skill knows what services you offer, it can assess offer maturity. If it knows your ICP, it can evaluate pipeline-to-market fit. If it knows your team, it can assess delegation readiness per workflow.

For the 30-day campaign: Works standalone. No prerequisites beyond being a practice owner with a week to describe.

Inside Practice Builders OS: The Site Survey IS the entry assessment. New members run it first. The Build Order becomes their personalized path through the monthly build cycle. This is what makes PBOS retention stick — 12+ months of builds in the right sequence for THEIR practice.


The Skill Output (Sections)

| # | Section | Job |
| --- | --- | --- |
| 1 | Practice Snapshot | Who you are, what you do, team size, client count — orientation |
| 2 | Weekly Time Map | Every significant workflow mapped: who does it, hours/week, which area it falls in (Offer / Ops / Pipeline) |
| 3 | Maturity Scores | Each workflow scored 1-4 on the maturity model (Structure / Streamline / Delegate / Autonomous) with one-line evidence from what they said |
| 4 | Area Summary | Aggregate score per area (Offer & Positioning, Operations & Delivery, Pipeline & Conversion) — where the practice is strong and where it's exposed |
| 5 | The Build Order | Top 3 systems to build, ranked by impact × maturity gap. Each one names the specific system, why it's first/second/third, and what changes when it's built |
| 6 | What's Working | 2-3 things that are already at high maturity — so the owner knows what NOT to touch |
| 7 | Quarterly Comparison | If they paste a prior Build Order: what moved, what didn't, what's next |

The Conversation Flow

The skill asks questions one at a time. Three phases.

Phase 1: Practice Snapshot (2-3 questions)

Phase 2: The Week Walk (5-8 questions, depending on depth)

Phase 3: Build Order (no questions — analysis and output)

Score every workflow against maturity. Map to the three areas. Produce the Build Order.

Present the full output and ask: "Does this match your reality? Anything I got wrong or missed?"


Handraiser / PBOS Upgrade Path

As a handraiser: Free skill. Run it, get your Build Order, see what to build first. The Build Order names specific systems. Some of those systems ARE skills in the PBOS toolkit.

The bridge: "You've got your Build Order. Inside Practice Builders OS, each of these systems is a monthly build — with a live workshop, async support, and a skill you install and run. Your Build Order becomes your membership path."

Inside PBOS: The Site Survey runs again every quarter. The Build Order updates. Members always know what's next. This is the retention engine — not content, not community vibes, not drip courses. A living sequence of builds personalized to their practice.


Teaching Story

TBD — needs real testing.

Kathryn runs the skill on her own practice and reports:


Distribution

| Field | Value |
| --- | --- |
| Trigger word | TBD |
| Delivery URL | TBD |
| Cloudinary URL | TBD |
| Campaign position | 30-day build-in-public series |

Open Questions

  1. Name: "The Site Survey" is the PBOS language. For the handraiser, does it need a more descriptive name? ("Practice Operations Survey"? "Build Order Generator"? Or keep Site Survey — it's distinctive.)
  2. 3 areas vs 6 categories: The Scalability Diagnostic uses 3 areas (Offer, Ops, Pipeline). AOS uses 6 capability categories. Which framework for the skill? 3 is simpler for a free handraiser. 6 is more precise for PBOS.
  3. Audience shift: The ops-os-skill-scoring targets $100K-$500K. The existing AOS audience is $500K-$2M. The Site Survey should work for both — but the Build Order recommendations may differ. Is that a problem or a feature?
  4. Maturity scoring depth: Full 4-level model (Structure/Streamline/Delegate/Autonomous) or simplified 3-level for the free version (No System / Documented / Delegated)?
  5. Build Order specificity: Should the Build Order name specific skills from the ops-os-skill-scoring top 15? Or describe the system generically and let PBOS make the connection? Naming skills is more actionable but creates a harder dependency.
  6. Scalability Diagnostic IP: Kathryn needs to confirm whether the original diagnostic questions and scoring are worth preserving or if this is a clean-sheet rebuild.
  7. Relationship to Constraint Identifier: The Site Survey surfaces operational gaps. The Constraint Identifier extracts what's broken from a CEO conversation. Different inputs, overlapping territory. Is the Site Survey the DIY version of what the Constraint Identifier does with an advisor?

Next Steps