Source: business/marketing/skills/website-eval-skill.md

---
type: skill
name: website-eval
description: "Brutally honest, multi-expert website evaluation. 6-step workflow: define target, assemble panel, run evaluation, prescribe fixes, benchmark, consolidate. Every page must pass this before it goes live."
status: active
---


Website Evaluation Skill

AI WORKFLOW: URL-IN → BRUTALLY HONEST, MULTI-EXPERT REVIEW + FIX PLAN

Run this on every page before pushing to Kathryn for review. No shortcuts. No skipping steps. Every confirmation gate requires explicit approval before proceeding.
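
The gate discipline above can be sketched as a tiny driver loop. This is a minimal illustration, not part of the skill itself; the step names and the `approve` callback are illustrative.

```python
# Minimal sketch of the gated 6-step flow: no step's output carries
# forward until the gate after it is explicitly approved.
STEPS = [
    "define target", "assemble panel", "run evaluation",
    "prescribe fixes", "benchmark", "consolidate",
]

def run_gated(approve):
    """Run steps in order; approve(step) must return True to advance."""
    completed = []
    for step in STEPS:
        if not approve(step):        # confirmation gate: explicit yes only
            return completed, step   # halted here, awaiting edits
        completed.append(step)
    return completed, None           # all gates passed
```

Calling `run_gated(lambda s: True)` walks all six steps; any gate that answers False stops the workflow at that step instead of silently continuing.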


STEP 1 — DEFINE THE EVALUATION TARGET (WHAT "GOOD" MEANS)

In this step we set the "north star" so the critique is not generic. We define the website's job, audience, offer, and success criteria. Without this, every later score is guesswork.

1.1 Context Questions (answer what you can; fill gaps responsibly)

  1. Website URL to evaluate (or file path if local HTML)
  2. Organization type (B2B/B2C, nonprofit, church, SaaS, consulting, etc.)
  3. Primary goal (choose 1–2): lead gen, booked calls, ecommerce sales, donations, newsletter, app installs, recruiting, credibility, other
  4. Primary audience (title/role + what they're trying to solve)
  5. Offer summary (what you sell + starting price range if relevant)
  6. Primary call-to-action (CTA) you want most visitors to take
  7. Differentiators (why choose you vs alternatives)
  8. Customer skepticism / objections you typically face
  9. Traffic sources (rough split): organic, paid, referrals, social, email, direct
  10. Constraints: brand tone, legal/compliance, "must keep" elements, or "do not change" items
  11. 3 examples of "great" websites you admire (optional)
  12. Your definition of "brutally honest" (e.g., "assume I'm wasting money unless proven otherwise")

1.2 Action

Produce a 1-page EVALUATION BRIEF including:

  1. The site's job (primary goal) and what success looks like
  2. Primary audience and the problem they're trying to solve
  3. Offer summary and the primary CTA
  4. Differentiators and the objections to overcome
  5. Traffic mix, constraints, and "do not change" items

1.3 Output

EVALUATION BRIEF (1 page): once approved, it is the reference every later score is judged against.

1.4 Confirmation Gate

Ask: "Approve STEP 1? (Yes/Edits)"


STEP 2 — ASSEMBLE THE VIRTUAL EXPERT PANEL + SCORECARD

In this step we define the "multi-expert team" and the rubric so the evaluation is consistent, defensible, and actionable.

2.1 Context Questions

  1. Are we optimizing primarily for: speed to conversion, premium positioning, maximum volume, donor trust, or recruitment?
  2. Any industries or competitors to benchmark against?
  3. Technical platform, if known (WordPress, Webflow, Squarespace, custom) — optional

2.2 AI-Defined Expert Panel (default; adjusted to context)

The AI proposes the panel (typically a conversion strategist, a copywriter, a UX/design reviewer, and a technical/SEO reviewer) and adjusts it to the answers in 2.1.

2.3 Scorecard Structure

8–10 categories, each with: a weight, a 1–10 score, the specific evidence behind the score, and the top issues found.
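
As a sketch, the scorecard reduces to weighted categories. The category names and weights below are assumptions for illustration, not the skill's canon:

```python
# Illustrative scorecard: category weights sum to 1.0; each category
# is scored 1-10 and the overall grade is the weighted average.
SCORECARD = {
    "first impression":      0.15,
    "messaging clarity":     0.20,
    "proof & trust":         0.15,
    "CTA & conversion path": 0.20,
    "design & UX":           0.15,
    "performance & SEO":     0.15,
}

def overall(scores):
    """Weighted overall score from per-category 1-10 scores."""
    assert abs(sum(SCORECARD.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(weight * scores[cat] for cat, weight in SCORECARD.items())
```

Weighting keeps one strong category (say, design) from masking a weak conversion path.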

2.4 Output

EXPERT PANEL roster plus SCORECARD (categories, weights, scoring scale), ready to apply in Step 3.

2.5 Confirmation Gate

Ask: "Approve STEP 2? (Yes/Edits)"


STEP 3 — RUN THE WEBSITE EVALUATION (PAGE-BY-PAGE + FUNNEL PATH)

In this step we apply the rubric to the actual site experience, from first impression through conversion. We document evidence, not vibes.

3.1 Context Questions

  1. Which pages matter most? (pick up to 5): Home, Services/Product, Pricing, About, Case Studies, Blog, Donate, Contact/Book, Landing page, Other
  2. Primary conversion path (example: Home → Services → Book a Call)
  3. Any known analytics outcomes (conversion rate, bounce, booked calls/month) — optional

3.2 Evaluation Procedure

Apply the Step 2 scorecard to each priority page, then walk the primary conversion path end-to-end, recording specific evidence for every score.
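
One way to keep "evidence, not vibes" enforceable is to make evidence a required field on every finding. This record shape is illustrative, not prescribed by the skill:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    page: str       # e.g. "Home" or "Pricing"
    category: str   # scorecard category being scored
    score: int      # 1-10
    evidence: str   # what was actually observed on the page

    def __post_init__(self):
        if not 1 <= self.score <= 10:
            raise ValueError("score must be 1-10")
        if not self.evidence.strip():
            raise ValueError("a score without evidence is a vibe")
```

A finding with an empty evidence string simply refuses to exist, which forces the reviewer to quote or describe what they saw.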

3.3 Output

Completed SCORECARD: per-page, per-category scores, each backed by specific evidence.

3.4 Confirmation Gate

Ask: "Approve STEP 3? (Yes/Edits)"


STEP 4 — PRESCRIPTION: PRIORITIZED IMPROVEMENT PLAN + COPY/STRUCTURE FIXES

In this step we convert critique into an execution plan. Not "consider improving X," but exactly what to change, where, and why.

4.1 Context Questions

  1. Your capacity: do you want "quick wins only" or "full rebuild plan"?
  2. Who will implement changes (you, contractor, agency) and any time constraint?
  3. Any brand voice constraints (e.g., restrained, board-safe, faith-forward, etc.)?

4.2 AI Outputs (delivered as concrete artifacts)

  1. PRIORITIZED FIX BACKLOG — Each item includes: problem, why it matters, specific change, suggested copy/sections, effort (S/M/L), impact (Low/Med/High), owner type (copy/design/dev)
  2. HOMEPAGE WIREFRAME OUTLINE (section-by-section order)
  3. ABOVE-THE-FOLD REWRITE OPTIONS (3 variants: direct, premium, warm-credible)
  4. PROOF STRATEGY (what proof is missing + how to present it)
  5. CTA PATH recommendation (reduce to 1 dominant CTA + 1 secondary)
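
The effort/impact labels in the backlog imply an ordering rule. A minimal sketch (the sample items are invented for illustration):

```python
# Sort the fix backlog: highest impact first, smallest effort breaking ties,
# so quick wins (high impact, small effort) surface at the top.
EFFORT = {"S": 1, "M": 2, "L": 3}
IMPACT = {"Low": 1, "Med": 2, "High": 3}

backlog = [
    {"problem": "no proof near the CTA",     "effort": "M", "impact": "High"},
    {"problem": "vague hero headline",       "effort": "S", "impact": "High"},
    {"problem": "heavy uncompressed images", "effort": "S", "impact": "Med"},
]

def prioritize(items):
    return sorted(items, key=lambda i: (-IMPACT[i["impact"]], EFFORT[i["effort"]]))
```

Under this rule the small high-impact copy fix outranks the medium-effort one, and both outrank the performance tweak.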

4.3 Confirmation Gate

Ask: "Approve STEP 4? (Yes/Edits)"


STEP 5 — BENCHMARK EXAMPLES: WEBSITES THAT "GOT IT RIGHT" (WITH REASONS)

In this step we ground recommendations in external examples so you can show patterns to designers or stakeholders.

5.1 Context Questions

  1. What style of examples do you want? (choose): minimalist premium, direct-response, enterprise trust, nonprofit donor trust, agency/consulting, church/ministry
  2. Any industries to avoid or prefer?
  3. Are you okay with examples outside your sector if the pattern is transferable?

5.2 AI Output

8–12 benchmark examples grouped into 3–4 "patterns" matched to your Step 1 context, such as minimalist premium, direct-response, enterprise trust, or nonprofit donor trust.

For each example: the URL, what it gets right, why it works for its goal, and the transferable pattern.

5.3 Confirmation Gate

Ask: "Approve STEP 5? (Yes/Edits)"


STEP 6 — FINAL CONSOLIDATION: BOARD-READY WEBSITE EVALUATION REPORT

In this final step we consolidate everything into one decisive deliverable.

6.1 AI Consolidation Output (single report)

One report combining the evaluation brief (Step 1), panel scorecard results (Steps 2–3), the prioritized fix plan (Step 4), and benchmark patterns (Step 5), written to be board-ready.

6.2 Confirmation Gate

Ask: "Approve FINAL REPORT? (Yes/Edits)"


START HERE

Reply with your STEP 1 answers (at minimum: URL/file path, org type, primary goal, audience, offer, primary CTA).