Designing a Frontend Developer Coding Test That Reflects the Actual Job
What frontend roles actually require
Most frontend work is not algorithms. It is:
- Reading an unfamiliar component tree and finding where state lives
- Wiring an API response into a UI without breaking edge cases (loading, error, empty)
- Writing CSS that survives content longer than the designer mocked
- Recognising when a re-render is the cause of a perf bug
- Knowing when to add a dependency and when not to
A LeetCode invert-a-binary-tree question filters for none of this. Worse, it filters out candidates who are excellent at the actual job but have no appetite for algorithmic puzzles.
A 90-minute test that measures the real thing
Give the candidate a small, broken React app with three issues:
- A subtle bug. A list re-renders all rows on a single change because the key prop is the array index. The list is laggy with >100 items but not obviously broken (first sketch below).
- An incomplete feature. A form that posts but doesn't handle loading or error states (second sketch below).
- A styling problem. A card layout that breaks when the title is longer than 40 characters (third sketch below).
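For concreteness, here is roughly the shape of the first issue and its fix. A minimal sketch, not the actual starter code: component and prop names are invented for illustration.

```tsx
import React, { useCallback, useState } from "react";

type Task = { id: string; label: string; done: boolean };

// The seeded bug: rows keyed by array index and a row component that is not
// memoised, so toggling one task re-renders every row. Index keys also tie a
// row's identity to its position, so memoising alone would not help once
// items are inserted or re-sorted.
//
//   {tasks.map((task, i) => (
//     <TaskRow key={i} task={task} onToggle={toggle} />   // <- the bug
//   ))}

// One fix a strong candidate might land on: a stable key, a memoised row,
// and a stable callback so the memo actually holds.
const TaskRow = React.memo(function TaskRow({
  task,
  onToggle,
}: {
  task: Task;
  onToggle: (id: string) => void;
}) {
  return (
    <li onClick={() => onToggle(task.id)}>
      {task.done ? "done: " : ""}
      {task.label}
    </li>
  );
});

export function TaskList({ initial }: { initial: Task[] }) {
  const [tasks, setTasks] = useState(initial);
  const toggle = useCallback((id: string) => {
    setTasks((prev) => prev.map((t) => (t.id === id ? { ...t, done: !t.done } : t)));
  }, []);
  return (
    <ul>
      {tasks.map((task) => (
        <TaskRow key={task.id} task={task} onToggle={toggle} />
      ))}
    </ul>
  );
}
```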
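The second issue rewards completeness rather than cleverness. A sketch of what a finished fix might look like, assuming a plain fetch-based form; the /api/signup endpoint, field, and copy are placeholders.

```tsx
import React, { useState } from "react";

type Status = "idle" | "submitting" | "error" | "success";

// A complete fix covers the states the seeded form skips: disable the button
// while the request is in flight, surface a failure, and confirm success.
export function SignupForm() {
  const [email, setEmail] = useState("");
  const [status, setStatus] = useState<Status>("idle");

  async function handleSubmit(e: React.FormEvent) {
    e.preventDefault();
    setStatus("submitting");
    try {
      const res = await fetch("/api/signup", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ email }),
      });
      if (!res.ok) throw new Error(`Request failed: ${res.status}`);
      setStatus("success");
    } catch {
      setStatus("error");
    }
  }

  if (status === "success") return <p>Thanks, you are signed up.</p>;

  return (
    <form onSubmit={handleSubmit}>
      <input type="email" value={email} onChange={(e) => setEmail(e.target.value)} required />
      <button type="submit" disabled={status === "submitting"}>
        {status === "submitting" ? "Sending…" : "Sign up"}
      </button>
      {status === "error" && <p role="alert">Something went wrong. Please try again.</p>}
    </form>
  );
}
```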
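The third issue usually traces back to a flex item that cannot shrink below its content width. One reasonable fix, with the component name and inline styles used purely for illustration:

```tsx
import React from "react";

// A long, unbroken title makes the heading's minimum content width wider than
// the card, so it overflows. Letting the flex item shrink (minWidth: 0) and
// allowing long words to wrap keeps the layout intact at any title length.
export function Card({ title, badge }: { title: string; badge: string }) {
  return (
    <div style={{ width: 280, border: "1px solid #ddd", padding: 16 }}>
      <div style={{ display: "flex", alignItems: "baseline", gap: 8 }}>
        <h3 style={{ margin: 0, minWidth: 0, overflowWrap: "anywhere" }}>{title}</h3>
        <span style={{ flexShrink: 0 }}>{badge}</span>
      </div>
    </div>
  );
}
```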
Ask them to fix all three. Provide the running app, the codebase, and the freedom to add libraries (or not).
This measures real skill: reading unfamiliar code, recognising patterns, judgment about when to add dependencies, taste in CSS, and thoroughness with edge cases.
Rubric
Score four dimensions, 1–4 each, anchored:
- Bug diagnosis. Did they identify the cause before fixing? Or did they patch a symptom?
- Edge-case completeness. Loading, error, empty — did they cover them without prompting?
- Code quality. Naming, structure, dependency choices.
- Communication. Did they leave comments or a short note explaining trade-offs?
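Writing the rubric down as data, not just prose, keeps two reviewers on the same scale. A minimal sketch; the anchor wording below is an example to adapt, not a prescribed standard.

```ts
type Score = 1 | 2 | 3 | 4;
type Dimension = "bugDiagnosis" | "edgeCases" | "codeQuality" | "communication";

// Illustrative anchors for each score. Keep the four scores separate rather
// than collapsing them into one number; the dimensions fail for different reasons.
const rubric: Record<Dimension, Record<Score, string>> = {
  bugDiagnosis: {
    1: "Patched a symptom without identifying the cause",
    2: "Fixed the bug but could not articulate why it happened",
    3: "Identified the root cause (e.g. the index key) and fixed it cleanly",
    4: "Identified the cause, fixed it, and verified the fix with the profiler",
  },
  edgeCases: {
    1: "Happy path only",
    2: "Handled one of loading / error / empty",
    3: "Handled all three after hints from the existing code",
    4: "Handled all three unprompted, with sensible copy",
  },
  codeQuality: {
    1: "Working but hard to follow",
    2: "Reasonable structure, questionable naming or dependency choices",
    3: "Clear naming and structure, justified dependency choices",
    4: "Reads like it already belongs in the codebase",
  },
  communication: {
    1: "No explanation of the changes",
    2: "Minimal comments, no trade-offs mentioned",
    3: "Short note explaining what changed and why",
    4: "Clear note covering trade-offs and what they would do with more time",
  },
};

type Review = Record<Dimension, Score>;
```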
Senior candidates routinely score 3–4 across all four dimensions. The test does not need to be hard to discriminate well — it needs to be real.
How to administer it without it leaking
- Rotate between 3–4 broken-app variants.
- Pin candidates to a randomly assigned variant (one stateless way to do this is sketched after this list).
- Use ClarityHire's keystroke and code coherence integrity signals so a candidate who pasted a fix from elsewhere is flagged for the reviewer to probe in the follow-up call.
- Always pair the test with a 30-minute follow-up where the candidate walks through their changes. If they cannot explain their own diff, their score drops accordingly.
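One stateless way to handle variant assignment is to hash a stable candidate identifier and take it modulo the number of variants: the same candidate always gets the same variant, and variants spread roughly evenly. A sketch with placeholder variant names:

```ts
// Deterministic variant assignment. FNV-1a is used only because it is a
// simple, dependency-free hash; any stable hash works.
const VARIANTS = ["variant-a", "variant-b", "variant-c", "variant-d"] as const;

function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash;
}

export function assignVariant(candidateId: string): (typeof VARIANTS)[number] {
  return VARIANTS[fnv1a(candidateId) % VARIANTS.length];
}

// assignVariant("candidate-1234") returns the same variant every time it is called.
```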
What never to do
- 4-hour take-homes. You will lose your best candidates to companies that respect their time.
- Open-ended "build a clone of X." Variance is too high; rubrics break.
- Tests that require setting up a local environment from scratch. Use a hosted IDE so setup time is zero.
The right frontend test takes 90 minutes, mirrors a Tuesday-morning ticket, and produces a rubric score you can defend in a debrief.