Build Skill — Scorecard Production Workflow
How to Use This Skill
Follow this workflow in order for every scorecard build. Do not skip steps. Do not jump ahead to building before the gap protocol is complete.
Step 1: Read Reference Data
Before anything else — before reading the source material, before opening the extraction transcript — read the client's reference data file.
The reference data file is the canonical source for:
- Every interviewer's correct name spelling and title
- Every team member's correct name
- Organization name and variations
- Tool names
Every proper noun in the scorecard must match reference data. Reference data wins every time.
Step 2: Read the Position Profile
Read the position profile or job description for the role. Extract:
- Must-have requirements (these will become focus areas or feed into them)
- Nice-to-have requirements (these may inform questions but don't drive focus area design)
- Competency expectations
- Role context within the organization (reporting structure, scope, key relationships)
- Organization's mission, vision, and values (if included; otherwise source separately)
Map each must-have to a potential evaluation area. This becomes the traceability check in QC — every must-have must be evaluable through at least one focus area.
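The must-have-to-focus-area mapping above can be sketched as a simple lookup with a check that nothing is left unmapped. This is an illustrative Python sketch, not part of the methodology; every requirement and focus-area name below is invented.

```python
# Hypothetical mapping from must-have requirements to the focus areas that
# evaluate them. An empty list is a traceability failure.
must_have_to_focus_areas = {
    "Owns annual budget cycle": ["Financial Stewardship"],
    "Leads cross-functional teams": ["Team Leadership"],
    "Board reporting experience": [],  # unmapped -> fails the QC check
}

def unmapped_must_haves(mapping):
    """Return the must-haves that no focus area evaluates."""
    return [req for req, areas in mapping.items() if not areas]

failures = unmapped_must_haves(must_have_to_focus_areas)
```

Any entry in `failures` would block the design-integrity check at Gate 2.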
Step 3: Read the Extraction Interview
Routing Check — Run This First
Is an extraction interview available for this practitioner's scorecard methodology?
- Yes → proceed. Read the transcript as the primary source.
- No → stop here. Go to 06-consultant-methodology.md and conduct the extraction interview first.
Source Material
Read all available source material in this order:
- Extraction interview transcript (primary source)
- Prior scorecard templates, if any (supplementary — captures format, not validated methodology)
- Alignment meeting notes, if the alignment meeting has already occurred
- Kickoff meeting notes (for interview team composition and decision-maker designation)
As you read, note what is explicitly stated vs. what is implied. Only explicitly stated methodology can be used. Implied methodology is a gap.
Step 4: Identify Gaps
After reading all source material, work through the Required Inputs table in 01-context.md. For each required input, determine:
- Present: The source material explicitly provides this information
- Gap: The source material does not provide this information
Common gaps:
- Focus areas not defined for this specific role (only generic competency language exists)
- Scoring scale not specified or not defined per level
- Question development approach not confirmed
- Interviewer assignments not mapped
- Presentation evaluation criteria not defined (even though a presentation is part of the process)
- Submission and debrief protocols not captured
Document every gap. Use the gap report format from 01-context.md.
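A gap report entry can be thought of as a small record; the canonical format lives in 01-context.md, so the field names in this sketch are assumptions, not the actual format.

```python
from dataclasses import dataclass

@dataclass
class Gap:
    missing_info: str       # what information is missing
    affected_section: str   # which section of the scorecard it affects
    advisor_action: str     # what the advisor needs to find out, and from whom
    status: str = "OPEN"    # stays OPEN until the advisor resolves it
    resolution: str = ""

def all_resolved(gaps):
    """Building may start only when every gap is marked RESOLVED (Step 5)."""
    return all(g.status == "RESOLVED" for g in gaps)
```

The `all_resolved` check mirrors the rule in Step 5: no building until every gap is resolved.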
Step 5: Stop — Present Gap Report to Advisor
Do not proceed to building until every gap is resolved.
Present the gap report to the advisor. For each gap, specify:
- What information is missing
- Which section of the scorecard it affects
- What the advisor needs to find out (and from whom)
Do not suggest answers to fill gaps. Surface them and wait.
When the advisor provides resolutions, record them in the gap report resolution log. Mark each gap RESOLVED before building.
Step 6: Design Focus Areas
This is the core intellectual work of the scorecard build. Focus areas are where methodology meets the specific role.
Focus Area Derivation
Start with the must-have requirements from the position profile. For each must-have, ask:
- What competency domain does this requirement test?
- What would a strong candidate demonstrate in this area?
- What would a weak candidate look like?
- Can this be evaluated through behavior-based questions in an interview setting?
Group related requirements into competency domains. Each domain becomes a candidate focus area.
Focus Area Validation
For each proposed focus area, confirm:
- It traces to at least one must-have requirement
- It can be evaluated through 3-5 behavior-based questions
- It is distinct from other focus areas (no significant overlap)
- It can be meaningfully assessed in the interview time available
- It has a natural owner on the interview team (someone with expertise to evaluate it)
Focus Area Description
For each focus area, write:
- Name — Specific to the role, not generic competency language
- Why this matters — 1-2 sentences connecting the focus area to the organization's needs
- What good looks like — Observable behaviors and evidence indicating strength
- What risk looks like — Observable behaviors and evidence indicating concern
Focus Area Assignment
Map each focus area to a specific interviewer based on:
- Interviewer's professional expertise (executive team members get their functional domain)
- Interviewer's organizational role (board members get strategic and governance areas)
- Balance across interviewers (no one interviewer overloaded, no one underloaded)
- Pairing considerations (paired interviewers should have complementary focus areas)
Confirm that every focus area is assigned. Confirm that every interviewer has at least one focus area.
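The two confirmation checks above are mechanical and can be sketched directly. All focus-area and interviewer names here are illustrative.

```python
# Hypothetical assignment map: focus area -> interviewer.
assignments = {
    "Financial Stewardship": "CFO",
    "Team Leadership": "VP People",
}
interviewers = ["CFO", "VP People", "Board Chair"]

# Check 1: every focus area is assigned to someone.
unassigned_areas = [area for area, who in assignments.items() if not who]

# Check 2: every interviewer has at least one focus area.
idle_interviewers = [p for p in interviewers if p not in assignments.values()]
```

In this example the Board Chair has no focus area, so the assignment step is not yet complete.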
Step 7: Develop Questions
For each focus area, develop 3-5 behavior-based questions.
Question Structure
Every question should:
- Ask about a specific past experience, not a hypothetical scenario
- Be open-ended (not answerable with yes/no)
- Target the competency being evaluated
- Be answerable by any qualified candidate (not so specific that only one background fits)
- Include 1-2 follow-up prompts to help the interviewer go deeper
Question Review
Before finalizing, check each question against:
- Does it evaluate the focus area it's assigned to? (Not a tangentially related topic)
- Could the answer reveal protected-class information? (Remove immediately if yes)
- Is it distinct from questions in other focus areas? (Avoid evaluating the same competency twice)
- Is it appropriate for the role level? (A C-suite question differs from a mid-level question)
- Would the practitioner ask this question? (Match the practitioner's voice and approach)
Step 8: Design Scoring
Scale Design
Implement the scoring scale captured during extraction. If the practitioner did not specify a scale, use this default and confirm with the advisor:
5-point behavioral scale:
- 5 — Exceptional: Candidate demonstrated this competency at a level that exceeds role requirements. Provided specific, detailed evidence of significant impact.
- 4 — Strong: Candidate demonstrated clear competency with relevant, specific examples. Meets role requirements fully.
- 3 — Adequate: Candidate demonstrated baseline competency. Examples were relevant but lacked depth or specificity.
- 2 — Concerning: Candidate's responses revealed gaps in this area. Examples were limited, vague, or raised questions.
- 1 — Insufficient: Candidate did not demonstrate this competency. Unable to provide relevant examples or provided examples that contradicted role requirements.
Recommendation Framework
Implement the recommendation scale captured during extraction. If not specified, use:
- Strong Yes: This candidate should be advanced. I have high confidence based on specific evidence.
- Yes: This candidate should be considered. Strengths outweigh concerns.
- No: This candidate should not advance. Concerns outweigh strengths.
- Strong No: This candidate should not advance. Significant gaps or risks identified.
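When the scorecard is built programmatically, both default scales can be carried as data with a completeness check that every point on the 5-point scale has a definition. This is a sketch of one possible representation, using the default labels above.

```python
# Default 5-point behavioral scale and recommendation framework as data.
behavioral_scale = {
    5: "Exceptional",
    4: "Strong",
    3: "Adequate",
    2: "Concerning",
    1: "Insufficient",
}
recommendations = ["Strong Yes", "Yes", "No", "Strong No"]

# Completeness check: a scale missing any level cannot be deployed.
assert set(behavioral_scale) == {1, 2, 3, 4, 5}, "every level must be defined"
```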
Justification Requirements
Every score — both section-level and overall recommendation — requires written justification. The justification prompt should include these instructions:
- Reference specific candidate statements, behaviors, or demonstrated competencies
- Do not rely on impressions, feelings, or general characterizations
- If you cannot articulate why you gave a score, reconsider the score
Step 9: Build the Scorecard Document
Assemble the scorecard in the format appropriate for the deployment method. Include:
- Header (role, organization, version, confidentiality)
- Interviewer information fields
- Presentation evaluation section (if applicable)
- Mission/values/must-haves alignment section
- Focus area evaluation sections (one per assigned focus area, with questions pre-populated)
- Overall recommendation section
- Additional notes field
- Submission instructions (deadline, method, recipient, no-cross-visibility rule)
Step 10: Build Interviewer Preparation Materials
Alongside the scorecard, produce:
- Focus area assignment summary (who evaluates what)
- Brief guide on behavior-based interviewing (what it is, how to listen, what to document)
- Scoring scale with definitions
- Submission instructions and deadline
- Reminder about consistency (same questions for every candidate)
- Contact information for questions
These materials are presented at the alignment meeting and included in the interviewer package.
Step 11: Run Gate 2 QC
After completing the build, run the Gate 2 checklist from 04-quality.md.
Run the checks in order:
- Design integrity (first — any traceability failure is blocking)
- Legal defensibility
- Content accuracy
- Usability
- Presentation section (if applicable)
- Debrief readiness
Fix every blocking failure before proceeding. After fixing, re-run the full Gate 2 checklist.
Step 12: Deliver for Advisor Review
After Gate 2 passes:
- Save the file to the correct output location per the deployment kit
- Note any warnings from QC
- Present to advisor for review
The scorecard does not go to the client or the interview team until the advisor reviews and approves.
When Building a Revision
When new information surfaces (role requirements change, interview team changes, alignment meeting produces adjustments):
- Read the current scorecard version
- Identify specifically what changed — focus areas, questions, assignments, scoring
- Update only the changed sections
- Run Gate 2 QC on the full scorecard after updating
- Increment the version number
Do not use a revision as an opportunity to redesign sections that weren't changed.