Interview Scorecard Design Kit — Start Here
What This Kit Does
This kit produces custom interview scorecards for recruiting engagements. A scorecard is the structured evaluation tool that interviewers use to assess candidates — what they're evaluating, how they score it, and what evidence they document. The scorecard drives consistency across interviewers, creates defensible documentation, and feeds the debrief where hiring decisions are made.
Every scorecard built through this kit follows the same production path: extraction interview → gap identification → advisor sign-off on gaps → build → QC → client review → deployment to interview team.
The Standard Path
The primary input for any scorecard is an extraction interview with the practitioner who runs the search — not a template from a job board or an HR software default. The consultant interviews the recruiting lead to capture how they design evaluations, what competency domains matter for the role, how interviewers are paired and assigned focus areas, and what the scoring methodology looks like in practice.
A pre-existing scorecard template (from a prior search, from the client's HR team, from a recruiting platform) is a supplementary input, not a replacement for the extraction interview. Templates capture structure; extraction captures methodology. When a template exists, it surfaces the format and the categories. The extraction interview fills in how focus areas are determined, how questions are developed, how interviewers are prepared, and what makes a scorecard actually work in the debrief.
Never build a scorecard from a template alone. If a template arrives without an extraction interview, treat every design decision in the template as unvalidated and flag each one as a gap.
Two Process Tracks
| Track | Description | When It Applies |
|---|---|---|
| Consultant Process | The practitioner works with the client to design the scorecard — determining focus areas, mapping them to interviewers, developing behavior-based questions, and preparing the interview team. | Every scorecard build. This is the design methodology. |
| Agent Process | AI assists in specific steps — generating behavior-based questions from competency domains, structuring the template from extraction notes, analyzing completed scorecards, summarizing debriefs. | When the practitioner uses AI tools as part of their workflow. Supplements, never replaces, the consultant process. |
Both tracks operate together. The consultant process determines what goes in the scorecard. The agent process accelerates specific production steps.
What It Produces
Primary deliverable: A client-specific interview scorecard customized to the role, the organization, and the interview team. The scorecard includes evaluation criteria, a scoring scale, behavior-based questions organized by focus area, and a recommendation framework.
Secondary deliverables:
- Focus area assignments mapped to interviewers
- Sample behavior-based questions per focus area
- Interviewer preparation materials (what the scorecard is, how to use it, what good evaluation looks like)
- Scoring summary template for debrief facilitation
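The deliverables above imply a simple data shape: focus areas mapped to interviewers, questions grouped under each area, and a scoring scale plus recommendation options at the scorecard level. A hypothetical sketch (field names and the example recommendation options are illustrative assumptions, not a schema this kit mandates):

```python
from dataclasses import dataclass, field

@dataclass
class FocusArea:
    """One evaluation focus area, owned by a named interviewer."""
    name: str
    interviewer: str  # the interviewer assigned this focus area
    questions: list[str] = field(default_factory=list)  # behavior-based questions

@dataclass
class Scorecard:
    """Client-specific scorecard: criteria, scale, questions, recommendation framework."""
    client: str
    role: str
    scoring_scale: tuple[int, int]  # e.g. (1, 5); the real scale comes from extraction
    focus_areas: list[FocusArea] = field(default_factory=list)
    recommendation_options: list[str] = field(
        default_factory=lambda: ["strong yes", "yes", "no", "strong no"]  # illustrative only
    )
```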
File naming: [client]-scorecard-[role-slug]-v[n]-[mon]-[yyyy].[ext]. Format depends on client preference and deployment method.
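A minimal helper showing how the naming pattern composes (the function name and argument order are assumptions; only the pattern itself comes from this kit):

```python
def scorecard_filename(client: str, role_slug: str, version: int,
                       month: str, year: int, ext: str) -> str:
    """Compose [client]-scorecard-[role-slug]-v[n]-[mon]-[yyyy].[ext]."""
    return f"{client}-scorecard-{role_slug}-v{version}-{month}-{year}.{ext}"
```

For example, `scorecard_filename("acme", "vp-sales", 2, "mar", 2025, "pdf")` yields `acme-scorecard-vp-sales-v2-mar-2025.pdf`.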
What This Kit Does Not Do
- Define the role. The scorecard evaluates candidates for a role that has already been defined. If the role hasn't been validated, the job description hasn't been written, or the must-haves haven't been established, the scorecard cannot be designed. Those are upstream deliverables in the recruiting process.
- Replace the extraction interview. The consultant methodology file explains what to ask and how. The scorecard cannot be designed without understanding how the practitioner actually evaluates candidates.
- Determine who should be on the interview team. Team composition, pairing logic, and decision-maker designation come from the recruiting process — specifically the kickoff meeting. The scorecard kit takes the interview team as an input and designs the evaluation tool they'll use.
- Make hiring decisions. The scorecard structures the evaluation. The debrief surfaces the discussion. The decision makers decide. The scorecard is an instrument, not a verdict.
The Gap Protocol
The gap protocol is the most important rule in this kit.
A gap is any required piece of content that is not present in the source material. Common gaps: focus areas not defined, scoring scale not established, interview team composition unknown, role must-haves not captured, behavior-based question methodology not confirmed.
The rule: Gaps are flagged — never filled. When a required input is missing, the build stops. A gap report is produced and reviewed by the advisor. The advisor decides how to fill the gap — through follow-up with the client, a targeted extraction session, or a documented decision. Only after every gap is resolved does the build proceed.
Filling a gap without advisor sign-off produces a scorecard with invented evaluation criteria. Invented criteria lead to inconsistent evaluations, indefensible hiring decisions, and interview teams who don't trust the tool they've been given.
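The gap rule can be expressed as a pre-build check: required inputs that are absent become a gap report, and the build refuses to proceed until every gap carries advisor sign-off. A sketch, using the common gaps listed above as the required-input names (the names themselves are illustrative keys, not a fixed spec):

```python
REQUIRED_INPUTS = [
    "focus_areas",
    "scoring_scale",
    "interview_team",
    "role_must_haves",
    "question_methodology",
]

def gap_report(source_material: dict) -> list[str]:
    """List every required input that is absent or empty. Gaps are flagged, never filled."""
    return [key for key in REQUIRED_INPUTS if not source_material.get(key)]

def can_build(source_material: dict, advisor_resolved: set = frozenset()) -> bool:
    """The build proceeds only when every gap has been resolved with advisor sign-off."""
    return all(gap in advisor_resolved for gap in gap_report(source_material))
```

Note that `can_build` never fills a gap itself; the only way past a gap is its presence in the advisor-resolved set.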
File Inventory
| File | Purpose | When to Use |
|---|---|---|
| 00-start-here.md | Orientation — two tracks, standard path, gap protocol | Start here every time |
| 01-context.md | Required inputs, gap identification protocol, what each section needs | Before every build — identify gaps before opening the skill |
| 02-terminology.md | Locked vocabulary for this kit | Reference when writing or reviewing any scorecard |
| 03a-golden-example-consultant.md | Golden example — consultant-designed scorecard benchmark | Study before designing any scorecard |
| 03b-golden-example-agent.md | Golden example — AI-assisted scorecard workflow benchmark | Study before using AI tools in the scorecard design process |
| 04-quality.md | QC checklists — design integrity + legal defensibility + usability | Run after every build and after every revision |
| 05-build-skill.md | Build workflow — from source analysis through delivery | Follow step by step for every build |
| 06-consultant-methodology.md | Extraction interview guide — what to ask, how to capture, what must be confirmed | Before every extraction session |
| 07-process-agent.md | AI-assisted workflow — what AI can do, what it cannot, specific workflow steps | Reference when AI tools are part of the production workflow |
Relationship to Other Kits
Recruiting Process SOP: The recruiting process SOP documents the full search lifecycle. The scorecard design is one stage within that process — it sits between the kickoff meeting and the team interviews. The SOP references the scorecard; this kit defines how to build one.
Job Description Optimization Kit: The job description and position profile are upstream inputs to the scorecard. The must-haves, nice-to-haves, and competency requirements defined in the job description flow directly into the scorecard's focus areas. The scorecard cannot be designed until the role is defined.
Candidate Experience Journey Kit: The scorecard is part of the candidate's experience — it determines what they're being asked and how they're being evaluated. The interview preparation materials that candidates receive (presentation instructions, focus areas disclosed to them, the structure of their interviews) must be consistent with what the scorecard measures.
Debrief Facilitation Kit: The scorecard feeds the debrief. How the scorecard is structured determines what information is available for the debrief discussion, how recommendations are aggregated, and what evidence interviewers bring to the table. A well-designed scorecard makes facilitation possible. A poorly designed one makes it performative.
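The feed from scorecard to debrief can be illustrated as a simple aggregation: individual scores grouped by focus area, so the facilitator sees spread and disagreement rather than a single averaged number (a hypothetical sketch, not the debrief kit's actual method):

```python
from collections import defaultdict

def summarize_for_debrief(completed_scorecards: list[dict]) -> dict[str, list[int]]:
    """Group scores by focus area across interviewers, preserving each individual
    score so the debrief can discuss disagreement instead of hiding it in a mean."""
    by_area: dict[str, list[int]] = defaultdict(list)
    for card in completed_scorecards:
        for area, score in card["scores"].items():
            by_area[area].append(score)
    return dict(by_area)
```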
Client Deployment Kit: Each client engagement has a deployment kit that extends this vault-level kit with client-specific brand, evaluation criteria, role context, and interview team details. Always use the deployment kit alongside this vault kit. The vault kit defines the universal methodology; the deployment kit defines how it applies to this specific client and role.
Gold Standard References
Golden examples for this kit will be drawn from the first completed client deployment that passes full QC and is used in a live search. Until that deployment exists, the golden example files contain structural specifications and placeholder notes. The methodology files (consultant methodology, build skill, quality) are complete and production-ready.