Interview Design

Building an Engineering Interview Question Bank You'll Actually Use

ClarityHire Team · Editorial · 2 min read

Why most question banks die

A team writes 30 great questions. Six months later, half are leaked on Glassdoor, a quarter are deprecated because the language stack changed, and the remaining few get reused for every candidate until they too leak. The bank shrinks faster than it grows.

The fix is treating the bank like a software artifact: versioned, instrumented, and pruned.

Structure that survives

Each question entry should carry:

  • Stable ID and version. When you tweak wording, bump the version. When you replace it, deprecate the ID.
  • Skill it measures. Not "JavaScript" — "can debug an async race condition without tooling."
  • Difficulty calibration. Junior / mid / senior, with anchored rubric notes.
  • Rubric. What level 1, 2, 3, 4 performance looks like. Concrete behaviors, not adjectives.
  • Estimated time. With variance. "30–45 minutes" not "should take about 30."
  • Leak risk. Updated quarterly based on the leak-detection process below.
  • Last used. So you can rotate.

Leak detection

Search the question text quarterly on Glassdoor, Reddit, Blind, LeetCode discussion threads, and any niche forums for the role. If you find it: retire, do not edit. An "edited" leaked question is still a leaked question.

For high-stakes roles, run the question text through a few candidate-facing AI assistants and see if they produce a polished answer. If they do, the question's ceiling is now the ceiling of those assistants. Either redesign so the assistant's answer is a starting point rather than the answer, or retire it.
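The manual search can be partially automated with fuzzy matching against scraped forum posts. A minimal sketch using the standard library's `difflib`; the threshold of 0.6 is a starting guess to tune, not an established cutoff:

```python
from difflib import SequenceMatcher


def leak_score(question_text: str, forum_post: str) -> float:
    """Similarity ratio in [0, 1] between the question and a scraped post."""
    return SequenceMatcher(None, question_text.lower(), forum_post.lower()).ratio()


def flag_leaks(question_text: str, posts: list[str], threshold: float = 0.6) -> list[str]:
    """Return posts that look like the question has leaked.

    Anything flagged goes to the retire-don't-edit workflow; the score only
    surfaces candidates for a human to confirm.
    """
    return [p for p in posts if leak_score(question_text, p) >= threshold]
```

Exact-substring search misses paraphrased leaks; ratio-based matching catches light rewording, though a human still makes the retire call.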

The right number of questions

Fewer than people think. For a typical engineering loop, you need roughly:

  • 4–6 active screen-stage coding questions
  • 3–4 active take-home prompts
  • 6–10 active onsite-stage questions across system design, debugging, and behavioral

Smaller bank, deeper rubrics, more rigorous calibration. A 100-question bank that nobody calibrates is worse than a 10-question bank everyone has scored against.
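Those target counts are easy to check automatically. A small sketch, with the stage names and ranges taken from the list above (the function and dict names are hypothetical):

```python
# Target ranges per stage, from the recommendations above.
TARGETS: dict[str, tuple[int, int]] = {
    "screen": (4, 6),       # screen-stage coding questions
    "take_home": (3, 4),    # take-home prompts
    "onsite": (6, 10),      # system design, debugging, behavioral
}


def bank_health(active_counts: dict[str, int]) -> dict[str, str]:
    """Compare active question counts per stage against the target ranges."""
    report = {}
    for stage, (lo, hi) in TARGETS.items():
        n = active_counts.get(stage, 0)
        if n < lo:
            report[stage] = f"under target ({n} < {lo}): author replacements"
        elif n > hi:
            report[stage] = f"over target ({n} > {hi}): prune and deepen rubrics"
        else:
            report[stage] = "ok"
    return report
```

Running this as part of the quarterly leak review keeps the bank inside the ranges instead of letting it silently shrink or bloat.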

The pattern with assessments

For technical screens, the bank lives in your assessment platform — not in a spreadsheet. ClarityHire stores question metadata, rubric, and per-candidate response data together so retiring a question and analyzing its score distribution is one query, not a manual audit.

A good question bank is a flywheel. Every interview you run gives you data on whether the question discriminates between strong and weak candidates. If it doesn't, retire it — even if it sounds like a good question.
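One standard way to measure whether a question discriminates is the point-biserial correlation between its scores and a binary outcome such as the final hire decision. A self-contained sketch using only the standard library (the outcome label and retirement cutoff are assumptions to calibrate locally):

```python
from statistics import mean, pstdev


def point_biserial(scores: list[float], hired: list[bool]) -> float:
    """Point-biserial correlation between question scores and a binary outcome.

    Values near zero mean the question does not separate strong from weak
    candidates, however good it sounds -- a retirement signal.
    """
    s = pstdev(scores)
    if s == 0:
        return 0.0  # everyone scored the same: zero discrimination
    mean_pass = mean(x for x, h in zip(scores, hired) if h)
    mean_fail = mean(x for x, h in zip(scores, hired) if not h)
    p = sum(hired) / len(hired)  # proportion with positive outcome
    return (mean_pass - mean_fail) / s * (p * (1 - p)) ** 0.5
```

With question metadata and per-candidate responses stored together, this is a short analysis step per question; questions whose correlation hovers near zero across enough interviews get retired.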
