Somewhere between the whiteboard era and the weekend-project era, the industry drifted into take-homes. The original reasoning was sound: "let's give candidates a realistic task they can do at their own pace." The format that grew out of it in a lot of places, though, is one that asks for eight hours of work from someone who hasn't been offered the job yet, and that the company's own engineers then review in fifteen minutes.
That exchange is lopsided. Here's the shape of a take-home we think holds up.
The one-evening rule
A take-home is usually more useful when it's scoped so that a competent working engineer can finish a substantive version of it in ninety minutes. Two hours at the outside. Much beyond that, the exercise starts to measure something other than engineering judgement — it starts to measure how much unstructured time the candidate has to spare.
That unstructured time isn't distributed evenly. Longer take-homes tend to favour candidates who happen to have capacity for them at the moment they apply: candidates without a demanding current job, candidates without primary caregiving responsibilities, candidates who aren't already running three other loops in parallel. None of those characteristics is a proxy for engineering judgement, which is the thing the exercise was meant to surface in the first place.
If a take-home currently runs six hours and there's no obvious way to trim it, what's usually going on underneath is that it's trying to do the work of two or three separate interview rounds at once. Splitting those jobs back apart (a short structured exercise, a conversation about it, and maybe a system-design round) tends to produce cleaner signal than leaving them fused into one long weekend task.
What to actually ask for
The single best predictor of whether a take-home will produce useful signal is whether the candidate has room to show you why they made choices, not just what they built.
Bad take-home: "Build a working URL shortener with auth and a dashboard." You will get back a half-finished URL shortener that looks almost exactly like what three hundred other candidates have submitted. You will learn almost nothing.
Good take-home: "Here is a URL shortener spec. Pick three decisions you would make differently from the spec and explain why. Then implement the piece you think is most interesting." You will get back a short written argument, a bit of code, and a clear view of whether the candidate thinks like an engineer or a code-completion tool.
The unit of signal is a decision the candidate made and can defend. Ask for those explicitly.
Pay, or shorten
If the take-home is substantial enough that a candidate could reasonably expect to be paid for it, the simplest thing is to pay them. A flat fee, paid whether they pass or fail, no invoicing dance: a gift card on submission is usually plenty. Teams that do this tend to get more complete work back, and over time they quietly build a reputation for respecting candidates' time.
If paying isn't an option, the alternative is to shorten the take-home until the exchange feels fair without payment. We've not really seen a middle path that holds up across any meaningful number of candidates.
Read it like you'd read a PR
One reviewing pattern we see often is that engineers end up reading a take-home the way they'd grade a LeetCode solution: pass/fail, correctness-first, small syntactic issues flagged as if this were a rubric. That's not usually how you'd review a teammate's PR on a Tuesday afternoon, and it's probably not the best read on a take-home either.
A lighter approach: read the README first. Read the commit messages if there are any. Read the trade-off notes. Then skim the code for the one or two places where the candidate clearly thought about something, and anchor the follow-up interview on those. A take-home tends to be more useful as an artefact you walk into a conversation with than as a score you submit to a tracker.
One alternative worth considering
If you've read this far and found yourself nodding, it may be because a lot of the current take-home genre is really trying to do the job of a short structured prompt plus a conversation. That's the shape CriticCode is built around: the candidate writes through the problem in their own words, with an AI collaborator they can brainstorm with, and the interviewer walks in with the full artefact (answers, chat, pastes) to drive the follow-up. Similar signal, ninety minutes, no weekend.
Whichever route you take: if the current take-home runs eight hours, an experiment worth trying is cutting it to two. Most teams who do this tell us they learn more from the shorter version, not less.