Source: frameworks/kit-interview-scorecard-design/00-start-here.md

Interview Scorecard Design Kit — Start Here

What This Kit Does

This kit produces custom interview scorecards for recruiting engagements. A scorecard is the structured evaluation tool that interviewers use to assess candidates — what they're evaluating, how they score it, and what evidence they document. The scorecard drives consistency across interviewers, creates defensible documentation, and feeds the debrief where hiring decisions are made.

Every scorecard built through this kit follows the same production path: extraction interview → gap identification → advisor sign-off on gaps → build → QC → client review → deployment to interview team.

The Standard Path

The primary input for any scorecard is an extraction interview with the practitioner who runs the search — not a template from a job board or an HR software default. The consultant interviews the recruiting lead to capture how they design evaluations, what competency domains matter for the role, how interviewers are paired and assigned focus areas, and what the scoring methodology looks like in practice.

A pre-existing scorecard template (from a prior search, from the client's HR team, from a recruiting platform) is a supplementary input, not a replacement for the extraction interview. Templates capture structure; extraction captures methodology. When a template exists, it surfaces the format and the categories. The extraction interview fills in how focus areas are determined, how questions are developed, how interviewers are prepared, and what makes a scorecard actually work in the debrief.

Never build a scorecard from a template alone. If a template arrives without an extraction interview, treat every design decision in the template as unvalidated and flag each one as a gap.

Two Process Tracks

| Track | Description | When It Applies |
| --- | --- | --- |
| Consultant Process | The practitioner works with the client to design the scorecard — determining focus areas, mapping them to interviewers, developing behavior-based questions, and preparing the interview team. | Every scorecard build. This is the design methodology. |
| Agent Process | AI assists in specific steps — generating behavior-based questions from competency domains, structuring the template from extraction notes, analyzing completed scorecards, summarizing debriefs. | When the practitioner uses AI tools as part of their workflow. Supplements, never replaces, the consultant process. |

Both tracks operate together. The consultant process determines what goes in the scorecard. The agent process accelerates specific production steps.

What It Produces

Primary deliverable: A client-specific interview scorecard customized to the role, the organization, and the interview team. The scorecard includes evaluation criteria, a scoring scale, behavior-based questions organized by focus area, and a recommendation framework.

Secondary deliverables:

File naming: [client]-scorecard-[role-slug]-v[n]-[mon]-[yyyy].[ext]. Format depends on client preference and deployment method.

What This Kit Does Not Do

The Gap Protocol

The gap protocol is the most important rule in this kit.

A gap is any required piece of content that is not present in the source material. Common gaps: focus areas not defined, scoring scale not established, interview team composition unknown, role must-haves not captured, behavior-based question methodology not confirmed.

The rule: Gaps are flagged — never filled. When a required input is missing, the build stops. A gap report is produced and reviewed by the advisor. The advisor decides how to fill the gap — through follow-up with the client, a targeted extraction session, or a documented decision. Only after every gap is resolved does the build proceed.

Filling a gap without advisor sign-off produces a scorecard with invented evaluation criteria. Invented criteria lead to inconsistent evaluations, indefensible hiring decisions, and interview teams who don't trust the tool they've been given.
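The flag-never-fill rule amounts to a hard gate before the build step. A minimal sketch of that gate, using the common gaps listed above as the required-input names; the function names and data shapes are illustrative, not part of the kit:

```python
# Required inputs drawn from the common gaps named in this kit.
REQUIRED_INPUTS = [
    "focus_areas",
    "scoring_scale",
    "interview_team",
    "role_must_haves",
    "question_methodology",
]

def find_gaps(source_material: dict) -> list[str]:
    """Return every required input missing or empty in the source material."""
    return [key for key in REQUIRED_INPUTS if not source_material.get(key)]

def start_build(source_material: dict) -> None:
    """Gate the build: gaps are flagged, never filled."""
    gaps = find_gaps(source_material)
    if gaps:
        # Build stops here; the gap report goes to the advisor for sign-off.
        raise RuntimeError(f"Build blocked pending advisor sign-off; gaps: {gaps}")
    # ... proceed with the build only when the gap list is empty
```

The point of the exception (rather than a warning) is that nothing downstream can run with an unresolved gap, which mirrors the kit's rule that the build stops until the advisor resolves every item.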

File Inventory

| File | Purpose | When to Use |
| --- | --- | --- |
| 00-start-here.md | Orientation — two tracks, standard path, gap protocol | Start here every time |
| 01-context.md | Required inputs, gap identification protocol, what each section needs | Before every build — identify gaps before opening the skill |
| 02-terminology.md | Locked vocabulary for this kit | Reference when writing or reviewing any scorecard |
| 03a-golden-example-consultant.md | Golden example — consultant-designed scorecard benchmark | Study before designing any scorecard |
| 03b-golden-example-agent.md | Golden example — AI-assisted scorecard workflow benchmark | Study before using AI tools in the scorecard design process |
| 04-quality.md | QC checklists — design integrity + legal defensibility + usability | Run after every build and after every revision |
| 05-build-skill.md | Build workflow — from source analysis through delivery | Follow step by step for every build |
| 06-consultant-methodology.md | Extraction interview guide — what to ask, how to capture, what must be confirmed | Before every extraction session |
| 07-process-agent.md | AI-assisted workflow — what AI can do, what it cannot, specific workflow steps | Reference when AI tools are part of the production workflow |

Relationship to Other Kits

Recruiting Process SOP: The recruiting process SOP documents the full search lifecycle. The scorecard design is one stage within that process — it sits between the kickoff meeting and the team interviews. The SOP references the scorecard; this kit defines how to build one.

Job Description Optimization Kit: The job description and position profile are upstream inputs to the scorecard. The must-haves, nice-to-haves, and competency requirements defined in the job description flow directly into the scorecard's focus areas. The scorecard cannot be designed until the role is defined.

Candidate Experience Journey Kit: The scorecard is part of the candidate's experience — it determines what they're being asked and how they're being evaluated. The interview preparation materials that candidates receive (presentation instructions, focus areas disclosed to them, the structure of their interviews) must be consistent with what the scorecard measures.

Debrief Facilitation Kit: The scorecard feeds the debrief. How the scorecard is structured determines what information is available for the debrief discussion, how recommendations are aggregated, and what evidence interviewers bring to the table. A well-designed scorecard makes facilitation possible. A poorly designed one makes it performative.

Client Deployment Kit: Each client engagement has a deployment kit that extends this vault-level kit with client-specific brand, evaluation criteria, role context, and interview team details. Always use the deployment kit alongside this vault kit. The vault kit defines the universal methodology; the deployment kit defines how it applies to this specific client and role.

Gold Standard References

Golden examples for this kit will be drawn from the first completed client deployment that passes full QC and is used in a live search. Until that deployment exists, the golden example files contain structural specifications and placeholder notes. The methodology files (consultant methodology, build skill, quality) are complete and production-ready.