Async Technical Interview Questions That Actually Reveal Skill
The async question problem
In 2026, almost any classic algorithm puzzle a candidate sees in an async take-home gets solved by a language model in seconds. So does most of LeetCode. So do the canonical "implement a cache" and "build a URL shortener" questions.
If your async stage is still using questions from the 2020 interview-prep canon, you are not screening for engineering skill — you are screening for ChatGPT access. This post collects four question formats that resist trivial AI completion and produce real signal in an async setting.
Format 1: Read-this-codebase questions
The candidate is given a small repository (50–500 lines) and asked to do something with it: find a bug, add a feature, refactor a class, write a missing test.
Example for a backend role:
"Attached is a small Express API with a `users` endpoint. There is a bug that causes the endpoint to return stale data after a user updates their profile. (1) Find and fix the bug. (2) Add a test that would have caught it. (3) Write 2–3 sentences explaining the underlying cause."
Why it works:
- The candidate must read code, not just write it. AI can write; the candidate still has to understand.
- The "explain the cause" deliverable is short but reveals depth. AI-generated explanations tend to be generic; real engineers point at the specific line.
- A live follow-up question ("walk me through how you found the bug") collapses any AI-only submission.
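To make the format concrete, here is a minimal sketch of the kind of bug such an exercise might plant: a read-through cache that is never invalidated on write. All names (`getUser`, `updateUser`, the `db`/`cache` maps) are invented for illustration; a real exercise would ship a full Express app, not this toy.

```javascript
// Toy data store plus a read-through cache. The bug: updates write to the
// store but never invalidate the cache, so reads keep returning old data.
const db = new Map([[1, { id: 1, name: "Ada" }]]);
const cache = new Map();

function getUser(id) {
  if (cache.has(id)) return cache.get(id); // serves stale data after updates
  const user = db.get(id);
  cache.set(id, user);
  return user;
}

function updateUser(id, fields) {
  db.set(id, { ...db.get(id), ...fields });
  // BUG: cache.delete(id) is missing, so getUser keeps the old record
}

function updateUserFixed(id, fields) {
  db.set(id, { ...db.get(id), ...fields });
  cache.delete(id); // invalidate so the next read hits the store
}
```

A candidate who found this would point at the missing invalidation line specifically; an AI-only explanation tends to talk about "caching issues" in the abstract.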
Format 2: Tradeoff-design questions
Instead of "implement X", ask "design X and write down what you considered."
Example for a senior engineer:
"Design a simple rate limiter for an internal API serving 50 RPS at peak. Implement a working version (any language) and write a 1-page README explaining: (1) the algorithm you chose and at least one alternative you rejected, (2) what you would change if traffic grew to 5000 RPS, (3) what you are not handling and why."
Why it works:
- The code is the small part. The README is the signal.
- "What you are not handling" is the killer question. AI is bad at admitting limits; senior engineers do it naturally.
- The deliverable is short to read (≤15 min for the reviewer), unlike a 4-hour project.
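For reviewers calibrating this question, a plausible submission core is a token bucket on the order of twenty lines. The sketch below mirrors the 50 RPS figure from the prompt; everything else is an illustrative assumption, not a reference answer.

```javascript
// Token-bucket rate limiter: tokens refill continuously at `ratePerSec`,
// capped at `burst`; each allowed request spends one token.
class TokenBucket {
  constructor(ratePerSec, burst, now = Date.now()) {
    this.rate = ratePerSec;
    this.capacity = burst;
    this.tokens = burst;
    this.last = now;
  }

  allow(now = Date.now()) {
    const elapsedSec = (now - this.last) / 1000;
    this.last = now;
    // Refill proportionally to elapsed time, never above capacity.
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.rate);
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// e.g. const limiter = new TokenBucket(50, 50); limiter.allow();
```

The interesting signal is not this code but the README around it: a strong candidate will note, unprompted, that single-process in-memory state is exactly what breaks first at 5000 RPS.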
Format 3: Debug-this-failure questions
Give the candidate a passing test, a failing test, and the code under test. Ask them to make the failing test pass without breaking the passing one.
This format is hard to game with AI because the AI tends to over-rewrite. A candidate who reasons about minimal changes will outperform a candidate who pastes a regenerated file.
Example:
"Attached is a date-parsing utility. The function correctly handles ISO 8601 dates (test 1 passes). It does not handle dates with mixed-case month abbreviations like `12-Mar-2025` (test 2 fails). Make test 2 pass without breaking test 1. Submit the smallest patch you can."
The phrase "smallest patch" is doing the work. Reviewers can read a 3-line diff in 30 seconds.
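To illustrate what "smallest" looks like, here is a hypothetical version of such a utility where the fix is one line. The parser shape and names (`parseDmy`, `MONTHS`) are invented; the point is that the patch normalizes the month's case before the table lookup instead of regenerating the whole file.

```javascript
// Month-abbreviation lookup keyed by title case: "Jan", "Feb", ...
const MONTHS = { Jan: 1, Feb: 2, Mar: 3, Apr: 4, May: 5, Jun: 6,
                 Jul: 7, Aug: 8, Sep: 9, Oct: 10, Nov: 11, Dec: 12 };

function parseDmy(text) {
  const [day, rawMonth, year] = text.split("-");
  // The patched line: normalize "MAR" / "mar" to "Mar" so any casing resolves.
  const month = rawMonth[0].toUpperCase() + rawMonth.slice(1).toLowerCase();
  return { year: Number(year), month: MONTHS[month], day: Number(day) };
}
```

A candidate who submits this one-line normalization is reasoning about minimal change; a candidate who submits a rewritten parser with a regex engine is probably pasting regenerated output.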
Format 4: Code-review-style questions
Give the candidate a piece of code with 3–5 issues of varying severity and ask them to file a code review.
Example for a mid-level role:
"Attached is a pull request adding a new `/checkout` endpoint to our e-commerce API. Review it as if it had been submitted by a colleague. List the issues you would block on, the issues you would suggest, and at least one thing the author did well. Order them by severity."
Why it works:
- Engineering is mostly reading other people's code, not writing greenfield code. This question tests the actual job.
- It surfaces judgment ("would you block or just suggest?"), which is the senior-vs-mid signal.
- It is hard to outsource: AI can flag obvious issues but ranks them poorly and rarely catches subtle issues.
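When writing this kind of question, the seed code needs issues planted at deliberately different severities. Below is a hypothetical example of such seed code; the handler shape and names are invented, and the comments mark where issues were planted (a real exercise would ship the code uncommented, as a pull request).

```javascript
// Hypothetical seeded checkout handler with issues at mixed severities.
function checkout(cart, payCents) {
  // Blocking-level issue: totals accumulate in floating-point dollars,
  // inviting rounding bugs; money should stay in integer cents throughout.
  let total = 0;
  for (const item of cart) total += item.priceDollars * item.qty;

  // Blocking-level issue: no validation that payCents covers the total,
  // and no handling of an empty cart.
  const changeCents = payCents - Math.round(total * 100);

  // Suggestion-level issue: magic string status; a named constant reads better.
  return { status: "ok", changeCents };
}
```

A mid-level candidate will flag most of these; the senior signal is the ordering, i.e. recognizing that the money-representation issue blocks while the magic string merely warrants a comment.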
What to avoid in async questions
- Pure algorithm puzzles. "Implement a sliding window for X" — solved by AI in seconds, weakly predictive of job performance even before AI.
- Anything over 2 hours. Top candidates skip these. See the drop-off data on time budget.
- Generic CRUD apps. "Build a todo list with Postgres" — every candidate produces a near-identical submission.
- Questions with one right answer. The async format is best at surfacing judgment, which requires multiple valid answers.
Following up live
Every async coding exercise should be paired with a 20-minute live follow-up where the candidate walks through their own submission. This single step does more for integrity than any AI-detection tool. In ClarityHire, the live room loads the candidate's submitted code automatically, so the interviewer arrives prepared.
Pairing questions with rubrics
A great question without a rubric still produces inconsistent results. For each question format above, pre-write a scoring rubric with at least 3 anchored levels (1 = below bar, 3 = at bar, 5 = above bar) before the first submission lands. See our best-practices guide for the full async hiring playbook.
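One way to keep anchored levels consistent across reviewers is to encode the rubric as data rather than prose in a doc, so every scorer sees identical wording. The criteria and anchor text below are examples for the rate-limiter question above, not a recommended standard.

```javascript
// A rubric as data: each criterion carries anchored descriptions for
// scores 1 (below bar), 3 (at bar), and 5 (above bar).
const rateLimiterRubric = {
  question: "tradeoff-design: rate limiter",
  criteria: [
    {
      name: "algorithm choice & rejected alternative",
      anchors: {
        1: "no rationale, or no alternative mentioned",
        3: "sound choice with one concrete rejected alternative",
        5: "choice tied to the 50 RPS constraint; tradeoffs quantified",
      },
    },
    {
      name: "scaling answer (5000 RPS)",
      anchors: {
        1: "hand-waves 'add more servers'",
        3: "identifies shared limiter state as the bottleneck",
        5: "proposes a specific design change and names its new failure modes",
      },
    },
  ],
};
```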