Assessment Design

Open-Book Coding Assessments: Designing for the AI Era

ClarityHire Team (Editorial) · 2 min read

The closed-book era is over

Pretending candidates do not have ChatGPT, Stack Overflow, GitHub Copilot, and the documentation for every tool they use is institutional self-deception. The serious question is no longer "should we let them use these?" — it is "what assessment produces signal when we do?"

What "open-book done well" looks like

Open-book does not mean "easier." Done well, it means:

  • The problem is harder than a closed-book version, because the candidate has tools.
  • The grading focuses on judgment, not recall.
  • The follow-up is mandatory, because the submission alone is no longer sufficient evidence of authorship.

Three open-book formats that produce signal

1. The "build something realistic in 90 minutes" task

The candidate has full internet, full AI, full docs. The task is realistic and slightly larger than they could complete unaided in the time. Grading focuses on whether they made the right architectural choices, prioritized correctly, and took a sensible testing approach.

This format rewards experienced engineers who know what to build over fast typists who can grind closed-book LeetCode.

2. The "improve this codebase" exercise

Candidate gets a working but flawed codebase. Task: identify and fix the three biggest issues, with a written explanation. Tools allowed: anything.

A senior engineer with AI is much faster than a junior one with AI. The differentiator becomes which problems they choose to fix.
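To make this concrete, here is a toy illustration of the kind of flaw such a codebase might plant. The code is entirely hypothetical, not drawn from any real exercise: a function that works but hides a quadratic cost, alongside the fix a strong candidate would reach for. The point of the format is that an AI assistant can write either version; the signal is in whether the candidate noticed the problem was worth fixing.

```python
# Hypothetical "improve this codebase" item: a working function with a
# hidden O(n^2) cost that a candidate should identify and rewrite.

def find_duplicates_slow(items):
    """Works, but rescans the whole list for every element: O(n^2)."""
    dupes = []
    for x in items:
        if items.count(x) > 1 and x not in dupes:
            dupes.append(x)
    return dupes

def find_duplicates(items):
    """Fixed version: one pass with a set, O(n)."""
    seen, dupes = set(), set()
    for x in items:
        if x in seen:
            dupes.add(x)
        seen.add(x)
    return sorted(dupes)

print(find_duplicates([3, 1, 3, 2, 1]))  # -> [1, 3]
```

A written explanation of *why* the rewrite matters (and at what input sizes it doesn't) is worth more rubric points than the rewrite itself.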

3. The "design + prototype" task

A small system design problem (40 min) followed by a 50-min prototype of the most interesting piece. Open everything. Grading focuses on the design discussion and the choices made in the prototype.

How to grade open-book fairly

The grading rubric must change. Drop "wrote idiomatic code" — Copilot does that for free. Add:

  • "Made appropriate architectural choices given constraints"
  • "Identified the right things to build first"
  • "Caught the non-obvious edge case"
  • "Used tools effectively and transparently"
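One way to keep such a rubric honest is to express it as data rather than prose, so every grader weights the same dimensions the same way. A minimal sketch, assuming illustrative dimension names and weights (these are not a standard, just one plausible calibration):

```python
# A minimal sketch of the open-book rubric above as data, not prose.
# Dimension names and weights are illustrative assumptions.

RUBRIC = {
    "architectural_choices": 0.35,  # appropriate given constraints
    "prioritization":        0.25,  # built the right things first
    "edge_case_handling":    0.20,  # caught the non-obvious case
    "tool_use":              0.20,  # effective and transparent
}

def score(marks: dict) -> float:
    """Weighted average of per-dimension marks on a 0-5 scale."""
    assert set(marks) == set(RUBRIC), "score every dimension"
    return sum(RUBRIC[dim] * marks[dim] for dim in RUBRIC)

print(round(score({
    "architectural_choices": 4,
    "prioritization": 5,
    "edge_case_handling": 3,
    "tool_use": 4,
}), 2))  # -> 4.05
```

Notice what is absent: no weight for "wrote idiomatic code," for exactly the reason given above.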

The mandatory follow-up

A 30–45 minute live discussion of the submission. The candidate walks through their code, explains choices, takes a small extension request. This is where authorship is confirmed.

Without the follow-up, open-book is unscoreable. With it, open-book gives you better signal than any closed-book interview ever did.

Tags: open book, AI assisted, modern interviews, assessment design
