Async Technical Interviews: A Best-Practice Guide
What an async technical interview actually is
An async technical interview is any technical evaluation a candidate completes on their own time, without an interviewer present: take-home coding projects, MCQ tests, recorded coding exercises with a timer, system design write-ups, even short Loom-style walkthroughs of a prior project.
The format has exploded since 2024 because engineering teams discovered the math: a live coding loop costs roughly 4 interviewer-hours per finalist; an async screen costs zero interviewer-hours. With AI-assisted candidates flooding top-of-funnel applications, async screens are the only way most teams can keep up. But async is only useful if it produces real signal. Below is the playbook we see top teams use.
1. Match scope to the role
A 90-minute take-home for a contract role is reasonable. The same 90-minute take-home for a junior who applied to twenty other companies this week is not. Match the time investment to the stage:
- First-round screen. 20–45 minutes. Multiple choice, short code, or a single focused exercise. Used to filter the bottom of the funnel.
- Deep skill check. 60–120 minutes, scheduled with a deadline. The candidate has agreed they are serious about the role. Used after a recruiter call.
- Final-round simulation. 2–4 hours, optionally paid. A scoped piece of realistic work — sometimes a small feature on a fake repo. Used only for senior or specialist roles.
If a single async stage takes more than 2 hours, you are losing the candidates you most want.
2. Pick problems that resist AI
In 2025, a question that ChatGPT solves in 10 seconds tells you nothing about the candidate. The async problems that still produce signal share a few properties:
- Context-heavy. The candidate must read existing code or a brief before writing anything. AI can write code, but it cannot read your particular codebase for the candidate.
- Judgment over completion. The deliverable includes written tradeoffs, not just a working solution. "Why did you pick this approach?" is the signal.
- Followed by a live walkthrough. Combine the async exercise with a short live follow-up where the candidate explains their own submission. AI-assisted candidates collapse here.
Detecting AI-generated code is one layer; designing problems that do not reward pure AI use is the better layer.
3. Wire up integrity signals
For coding rounds done in a browser, you can capture signals that tell you whether the candidate behaved like a normal engineer or like someone running a copilot in another window. Things to log:
- Paste events (paste of 200+ characters, especially mid-typing)
- Tab focus losses ("did the editor lose focus 40 times in 45 minutes?")
- Keystroke patterns — humans type code in characteristic bursts; pasted code arrives all at once
- Time-to-first-keystroke after each prompt change
None of these are individually proof of cheating. Together they paint a picture that you can probe in the live follow-up. ClarityHire's async coding sessions include all four signals in the integrity report attached to the submission.
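To make the signals above concrete, here is a minimal sketch of how a browser-based editor might classify them. All names and thresholds are illustrative assumptions, not ClarityHire's actual implementation; the 200-character paste threshold and the burst heuristic come from the guidelines above.

```typescript
// Hypothetical integrity-signal classifiers. The event shapes and
// thresholds are illustrative, not any platform's real API.

interface IntegrityEvent {
  kind: "paste" | "blur" | "keystroke";
  at: number;     // timestamp in ms since session start
  chars?: number; // payload size, for paste events
}

// Flag pastes of 200+ characters, per the guideline above.
function isLargePaste(e: IntegrityEvent): boolean {
  return e.kind === "paste" && (e.chars ?? 0) >= 200;
}

// Count how many times the editor lost focus during the session.
function focusLosses(events: IntegrityEvent[]): number {
  return events.filter((e) => e.kind === "blur").length;
}

// Humans type in bursts separated by pauses; a long run of keystrokes
// with near-zero inter-key gaps suggests injected (pasted) text.
function hasMachineLikeRun(
  events: IntegrityEvent[],
  minRun = 30,
  maxGapMs = 5
): boolean {
  const keys = events.filter((e) => e.kind === "keystroke").map((e) => e.at);
  let run = 1;
  for (let i = 1; i < keys.length; i++) {
    run = keys[i] - keys[i - 1] <= maxGapMs ? run + 1 : 1;
    if (run >= minRun) return true;
  }
  return false;
}
```

In a real session these classifiers would be fed by `paste`, `blur`, and `keydown` listeners on the editor; keeping the classification logic pure, as here, makes the heuristics easy to tune and test offline.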
4. Score with a rubric, not vibes
Async submissions get evaluated by whoever has time. That whoever-has-time problem is the single biggest source of inconsistency in async hiring. Fix it by:
- Publishing a written rubric before the first submission lands
- Scoring each dimension separately (correctness, code quality, design judgment, communication)
- Using AI-assist for grading only as a first pass — never as the final word
A consistent 3-out-of-5 from a clear rubric beats an enthusiastic "they crushed it" from a single reviewer who liked them.
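Scoring each dimension separately can be as simple as a weighted mean over the four dimensions named above. A minimal sketch, with weights that are an illustrative assumption rather than a recommendation:

```typescript
// Illustrative rubric scorer. Dimension names mirror the list above;
// the weights are assumed for the example, not a recommended split.

type Dimension = "correctness" | "codeQuality" | "designJudgment" | "communication";

const weights: Record<Dimension, number> = {
  correctness: 0.35,
  codeQuality: 0.25,
  designJudgment: 0.25,
  communication: 0.15, // weights sum to 1.0
};

// Each dimension is scored 1-5; the overall score is the weighted
// mean, rounded to two decimal places.
function overallScore(scores: Record<Dimension, number>): number {
  let total = 0;
  for (const d of Object.keys(weights) as Dimension[]) {
    total += scores[d] * weights[d];
  }
  return Math.round(total * 100) / 100;
}
```

Publishing the weights alongside the rubric keeps reviewers honest: a submission that is strong on correctness but weak everywhere else scores visibly lower than a uniformly solid one, which is exactly the "consistent 3-out-of-5" behavior you want.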
5. Respect the candidate's time
The strongest candidates have offers. They evaluate your async stage as a signal of how you'll treat them later. The async stages that win candidates:
- State the time budget up front and stick to it
- Send results within a week (a "we'll be in touch" with no SLA reads as a red flag to senior candidates)
- Reuse the candidate's submission in later rounds rather than starting from scratch
- Pay candidates for any exercise that takes more than 2 hours of real work
How ClarityHire supports async interviews
ClarityHire's async assessments include browser-based coding with full integrity signals, AI-assisted grading on a published rubric, and a one-click "promote this submission to a live follow-up" flow that opens a live coding room already loaded with the candidate's code. The async-to-live handoff is the part most platforms miss.