---
type: skill
name: website-eval
description: "Brutally honest, multi-expert website evaluation. 6-step workflow: define target, assemble panel, run evaluation, prescribe fixes, benchmark, consolidate. Every page must pass this before it goes live."
status: active
---
Website Evaluation Skill
AI WORKFLOW: URL-IN → BRUTALLY HONEST, MULTI-EXPERT REVIEW + FIX PLAN
Run this on every page before pushing to Kathryn for review. No shortcuts. No skipping steps. Every confirmation gate requires explicit approval before proceeding.
STEP 1 — DEFINE THE EVALUATION TARGET (WHAT "GOOD" MEANS)
In this step we set the "north star" so the critique is not generic. We define the website's job, audience, offer, and success criteria. Without this, every later score is guesswork.
1.1 Context Questions (answer what you can; fill gaps responsibly)
- Website URL to evaluate (or file path if local HTML)
- Organization type (B2B/B2C, nonprofit, church, SaaS, consulting, etc.)
- Primary goal (choose 1–2): lead gen, booked calls, ecommerce sales, donations, newsletter, app installs, recruiting, credibility, other
- Primary audience (title/role + what they're trying to solve)
- Offer summary (what you sell + starting price range if relevant)
- Primary call-to-action (CTA) you want most visitors to take
- Differentiators (why choose you vs alternatives)
- Customer skepticism / objections you typically face
- Traffic sources (rough split): organic, paid, referrals, social, email, direct
- Constraints: brand tone, legal/compliance, "must keep" elements, or "do not change" items
- 3 examples of "great" websites you admire (optional)
- Your definition of "brutally honest" (e.g., "assume I'm wasting money unless proven otherwise")
1.2 Action
Produce a 1-page EVALUATION BRIEF including:
- Website job-to-be-done
- Target persona
- Key conversion path
- Success metrics
- Non-negotiables
- Evaluation priorities (e.g., conversion over aesthetics, clarity over cleverness)
1.3 Output
- "EVALUATION BRIEF (v1)" (editable)
- "SCORING PRIORITIES" (weights by category)
1.4 Confirmation Gate
Ask: "Approve STEP 1? (Yes/Edits)"
STEP 2 — ASSEMBLE THE VIRTUAL EXPERT PANEL + SCORECARD
In this step we define the "multi-expert team" and the rubric so the evaluation is consistent, defensible, and actionable.
2.1 Context Questions
- Are we optimizing primarily for: speed to conversion, premium positioning, maximum volume, donor trust, or recruitment?
- Any industries or competitors to benchmark against?
- Any technical platform known (WordPress, Webflow, Squarespace, custom) — optional.
2.2 AI-Defined Expert Panel (default; will adjust to context)
- Direct Response / Conversion Strategist (clarity, CTA, funnel, friction)
- Brand Positioning Strategist (message-market fit, differentiation, category cues)
- UX Researcher (information scent, usability, accessibility)
- Copy Chief Editor (above-the-fold, narrative flow, proof, objections)
- SEO Strategist (search intent alignment, technical + on-page basics)
- Performance/Technical Analyst (speed, mobile, Core Web Vitals, tracking readiness)
- Trust & Risk Reviewer (privacy, claims substantiation, social proof integrity)
- Visual/Design Systems Critic (hierarchy, scannability, consistency)
2.3 Scorecard Structure
8–10 categories, each with:
- Definition of "excellent"
- 5–10 observable criteria (yes/no + quality scale)
- Scoring (0–5) with anchors for 1/3/5
- Evidence notes (what on the page triggered the score)
- Fix effort estimate (S/M/L)
- Impact estimate (Low/Med/High)
- Weighting aligned to STEP 1 priorities
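The scorecard mechanics above can be sketched in a few lines. This is a minimal illustration, not part of the skill itself: the category names, weights, and scores below are invented, and it assumes the STEP 1 weights are normalized to sum to 1.0 so the weighted overall lands on the same 0–5 scale.

```python
from dataclasses import dataclass

@dataclass
class Category:
    name: str
    weight: float  # share of overall score, from SCORING PRIORITIES (STEP 1)
    score: int     # 0-5, with anchors defined for 1/3/5
    effort: str    # fix effort estimate: "S", "M", or "L"
    impact: str    # impact estimate: "Low", "Med", or "High"

def weighted_overall(categories):
    """Weighted overall score, still on the 0-5 scale."""
    assert abs(sum(c.weight for c in categories) - 1.0) < 1e-9
    return sum(c.weight * c.score for c in categories)

# Illustrative categories only -- a real scorecard has 8-10 of these.
cats = [
    Category("Conversion clarity", 0.4, 2, "M", "High"),
    Category("Trust & proof",      0.3, 3, "S", "Med"),
    Category("Technical hygiene",  0.3, 4, "L", "Low"),
]
print(round(weighted_overall(cats), 2))  # 0.4*2 + 0.3*3 + 0.3*4 = 2.9
```

Keeping the weights explicit is what makes results comparable across pages and across re-runs of the skill.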
2.4 Output
- "EXPERT PANEL + SCORECARD (v1)"
- "SCORING SCALE DEFINITIONS" (so results don't drift)
2.5 Confirmation Gate
Ask: "Approve STEP 2? (Yes/Edits)"
STEP 3 — RUN THE WEBSITE EVALUATION (PAGE-BY-PAGE + FUNNEL PATH)
In this step we apply the rubric to the actual site experience, from first impression through conversion. We document evidence, not vibes.
3.1 Context Questions
- Which pages matter most? (pick up to 5): Home, Services/Product, Pricing, About, Case Studies, Blog, Donate, Contact/Book, Landing page, Other
- Primary conversion path (example: Home → Services → Book a Call)
- Any known analytics outcomes (conversion rate, bounce, booked calls/month) — optional
3.2 Evaluation Procedure
- First-impression assessment (5 seconds + 30 seconds): Who is this for? What do they do? Why them? What do I do next?
- Above-the-fold teardown (headline, subhead, CTA, proof, visual hierarchy)
- Offer architecture check (packages, outcomes, process, pricing cues, risk reversal)
- Objection handling audit (trust, proof, specificity, "why now," FAQ gaps)
- UX friction scan (navigation, scrolling burden, readability, mobile assumptions)
- Technical hygiene scan (speed cues, mobile friendliness, broken elements, accessibility red flags)
- SEO intent check (title/meta/H1 clarity, topic focus, internal linking basics)
- Compliance/trust check (privacy, terms, claims, testimonials, disclaimers)
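For the SEO intent check in the list above, the title/meta/H1 pull can be automated with nothing but the Python standard library. A hedged sketch, assuming raw HTML is already in hand (the sample markup is invented for illustration):

```python
from html.parser import HTMLParser

class SeoCheck(HTMLParser):
    """Extracts <title>, the meta description, and the first <h1>."""
    def __init__(self):
        super().__init__()
        self.title = self.h1 = self.meta_description = None
        self._collecting = None  # tag whose text we are currently capturing

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._collecting = "title"
        elif tag == "h1" and self.h1 is None:
            self._collecting = "h1"
        elif tag == "meta" and attrs.get("name") == "description":
            self.meta_description = attrs.get("content")

    def handle_data(self, data):
        if self._collecting == "title":
            self.title = (self.title or "") + data
        elif self._collecting == "h1":
            self.h1 = (self.h1 or "") + data

    def handle_endtag(self, tag):
        if tag in ("title", "h1"):
            self._collecting = None

# Invented sample page, standing in for the fetched HTML of the page under review.
html = """<html><head><title>Roof Repair in Austin | Acme</title>
<meta name="description" content="Same-week roof repair, free quote."></head>
<body><h1>Roof repair without the runaround</h1></body></html>"""

checker = SeoCheck()
checker.feed(html)
print(checker.title, checker.h1, checker.meta_description, sep=" | ")
```

A missing or `None` field here is exactly the kind of observable evidence the EVIDENCE LOG in 3.3 should record.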
3.3 Output
- "SCORECARD RESULTS" (weighted overall score + category scores)
- "EVIDENCE LOG" (specific sections, quotes, UI elements observed)
- "TOP 10 ISSUES" ranked by impact
3.4 Confirmation Gate
Ask: "Approve STEP 3? (Yes/Edits)"
STEP 4 — PRESCRIPTION: PRIORITIZED IMPROVEMENT PLAN + COPY/STRUCTURE FIXES
In this step we convert critique into an execution plan. Not "consider improving X," but exactly what to change, where, and why.
4.1 Context Questions
- Your capacity: do you want "quick wins only" or "full rebuild plan"?
- Who will implement changes (you, contractor, agency) and any time constraint?
- Any brand voice constraints (e.g., restrained, board-safe, faith-forward)?
4.2 AI Outputs (delivered as concrete artifacts)
- PRIORITIZED FIX BACKLOG — Each item includes: problem, why it matters, specific change, suggested copy/sections, effort (S/M/L), impact (Low/Med/High), owner type (copy/design/dev)
- HOMEPAGE WIREFRAME OUTLINE (section-by-section order)
- ABOVE-THE-FOLD REWRITE OPTIONS (3 variants: direct, premium, warm-credible)
- PROOF STRATEGY (what proof is missing + how to present it)
- CTA PATH recommendation (reduce to 1 dominant CTA + 1 secondary)
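The backlog ranking implied above can be made mechanical. A minimal sketch, assuming the S/M/L effort and Low/Med/High impact labels from the scorecard map to numbers and that "impact per unit of effort" is the sort key; the items and mappings are illustrative, not prescribed:

```python
IMPACT = {"Low": 1, "Med": 2, "High": 3}
EFFORT = {"S": 1, "M": 2, "L": 3}

# Invented backlog items, standing in for the real PRIORITIZED FIX BACKLOG.
backlog = [
    {"fix": "Rewrite above-the-fold headline", "impact": "High", "effort": "S"},
    {"fix": "Rebuild pricing page",            "impact": "High", "effort": "L"},
    {"fix": "Compress hero images",            "impact": "Med",  "effort": "S"},
]

def priority(item):
    # Higher impact per unit of effort floats to the top: quick wins first.
    return IMPACT[item["impact"]] / EFFORT[item["effort"]]

ranked = sorted(backlog, key=priority, reverse=True)
for item in ranked:
    print(f'{priority(item):.2f}  {item["fix"]}')
```

This is why the "quick wins only" question in 4.1 matters: with a capacity cap, the sorted list is simply truncated rather than re-scored.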
4.3 Confirmation Gate
Ask: "Approve STEP 4? (Yes/Edits)"
STEP 5 — BENCHMARK EXAMPLES: WEBSITES THAT "GOT IT RIGHT" (WITH REASONS)
In this step we ground recommendations in external examples so you can show patterns to designers or stakeholders.
5.1 Context Questions
- What style of examples do you want? (choose): minimalist premium, direct-response, enterprise trust, nonprofit donor trust, agency/consulting, church/ministry
- Any industries to avoid or prefer?
- Are you okay with examples outside your sector if the pattern is transferable?
5.2 AI Output
8–12 benchmark examples grouped into 3–4 "patterns," such as:
- Pattern A: Crystal-clear above-the-fold + single CTA
- Pattern B: Proof-forward storytelling
- Pattern C: Simple offers + explicit process
- Pattern D: Premium positioning without fluff
For each example:
- What they do right (specific element)
- How to adapt it to your site (actionable translation)
5.3 Confirmation Gate
Ask: "Approve STEP 5? (Yes/Edits)"
STEP 6 — FINAL CONSOLIDATION: BOARD-READY WEBSITE EVALUATION REPORT
In this final step we consolidate everything into one decisive deliverable.
6.1 AI Consolidation Output (single report)
- Executive Summary (what's broken, what's working, overall score, 3 key moves)
- Evaluation Brief (from STEP 1)
- Scorecard Results (from STEP 3) + key evidence
- Top Issues by Impact (from STEP 3)
- Prioritized Fix Backlog + 30/60/90 day plan (from STEP 4)
- Proposed Homepage Outline + recommended CTA path (from STEP 4)
- Benchmark Patterns + examples (from STEP 5)
- Appendix: Evidence Log + copy variants + notes
6.2 Confirmation Gate
Ask: "Approve FINAL REPORT? (Yes/Edits)"
START HERE
Reply with your STEP 1 answers (at minimum: URL/file path, org type, primary goal, audience, offer, primary CTA).