A live-coding interview, in practice, tends to measure two things pretty reliably: how well the candidate keeps their nerves in check, and how recently they've rehearsed the kinds of algorithmic patterns the problem is likely to draw on. What it often struggles to surface is how someone actually reasons about a problem, because the format leaves very little room for reasoning. It gives the candidate roughly 40 minutes to produce something that compiles while a stranger watches.

Whiteboarding earned its place as an interview default for a real reason. It used to be a rough proxy for "has done computer science", and if you squinted, that correlated with the work. It's harder to defend now for a different reason: the ends of the distribution have stopped being informative. The weaker performances you see in live-coding today often aren't juniors who haven't learned yet. They're experienced engineers who couldn't pull the right algorithm out of their head at 10am on a Tuesday. And the strongest performances often aren't people who solved the problem brilliantly. They're people who happened to have practised the same LeetCode-style question the week before.

What we're actually trying to measure

If you asked any hiring manager to list the things they want to know about a candidate before making an offer, almost none of the items on that list would be "can they implement quicksort on a whiteboard." The list usually looks more like:

  • How do they frame an ambiguous problem when you hand it to them cold?
  • What trade-offs do they notice, and which ones do they ignore?
  • When they don't know something, what do they do: freeze, guess, or ask?
  • How do they explain a decision to someone who wasn't in the room?
  • When they're wrong, how do they notice, and how quickly do they back out?

Not a single one of these survives the format of a timed coding pop-quiz. All of them survive — and show up clearly — when the candidate has a few hours, a realistic problem, and the tools they'd actually use on the job.

"But we need to see them code"

You do. But you don't need to see them code in real time, with you staring at them, on a problem they've never seen, in a language they might not have touched in a year. That's not how coding works in your actual company, and it's not how coding works for the candidate either.

What you need is evidence that they can code, and evidence of how they think while doing it. Those are two different questions. The first is usually answered by looking at their repo, a previous project, or the code they produced on a take-home. The second is answered by reading their words: the assumptions they wrote down, the trade-offs they flagged, the questions they asked of an AI collaborator along the way.

That's the shift we think most interview loops would benefit from. Instead of watching the code get written in real time, collect the artefacts of the thinking and walk into the follow-up ready to have a real conversation about the work.

What CriticCode does

With CriticCode, you give your candidates a realistic challenge and a handful of structured prompts: assumptions, trade-offs, how they'd test it, whatever you and your team actually care about. They answer in their own words, with an AI collaborator on hand to brainstorm with, the same way they'd work with an AI on their own machine tomorrow. When they submit, you get their answers, the full AI transcript, and a highlight of every piece of text they pasted in and left as-is.

You don't get a score. You don't get a ranking. You get a prep artefact: a concrete thing to walk into the follow-up interview with. The interview, in other words, starts from the signal rather than trying to generate it live while the candidate's cortisol is spiking.

CriticCode isn't trying to replace the interview. The hope is to make the one you already run considerably more useful, by ensuring you walk into it with information instead of fishing for it under time pressure.