There's a real logic behind the "no AI use during this assessment" clause. Back when AI tools were new, you wanted the candidate's own code, not a chatbot's output with their name on it, and the rule caught that cleanly. It's a reasonable instinct, and it's still the default on most take-homes in 2026. The awkward part is that the premise underneath it has quietly shifted.

Almost every engineer you interview this year has already had AI help with some of their recent work. Most of the strong ones will tell you that openly if the process makes space for it. What the clause tends to filter for now isn't unassisted coders; it's candidates who are willing either to pretend the tool didn't exist or to step out of the process entirely. Both of those are weaker signals than the one the rule was originally designed to capture.

What the rule ends up selecting for

We usually see candidates respond to a no-AI clause in one of three ways.

The first group reads it, shrugs, and uses AI anyway. They aren't worse engineers than anyone else; the rule just reads as performative, and they treat it that way. The unintended consequence is that the process quietly starts selecting for candidates who are comfortable concealing their workflow, and against the ones who would have been honest about it.

The second group takes the clause seriously and produces a take-home that's noticeably weaker than the work they'd do on day one of the job. The skill being tested (coding unassisted) is one they'll almost never be asked to demonstrate again once they're hired, so the signal being gathered has limited predictive value for on-the-job performance.

The third group reads the clause and self-selects out of the pipeline. These are often the experienced engineers who already use AI daily and read the premise as a sign that the team hasn't updated its thinking about the craft. Losing those candidates at the door is the quietest failure mode, because you never see the applications you didn't get.

None of the three groups is really answering the question the rule set out to ask.

The question behind the question

The worry that produced the no-AI clause in the first place is usually one of these:

  • Can this person code at all, or did a model produce everything?
  • Can they think for themselves, or do they paste the model's output without understanding it?
  • Are they going to be useful on day one, or will they hit a wall the first time AI isn't enough?

Those are real concerns, and they're worth answering. What we've found is that banning the tool doesn't really answer any of them. What does answer them is watching what the candidate does with the tool: where they push back on its output, where they notice it's wrong, what they choose to ignore, how they phrase their questions, and whether the artefact they submit shows a working mental model rather than a transcript they didn't fully understand.

That's harder to assess than a closed-book coding test. It's also a closer reflection of the actual job.

What seems to work instead

Treat AI as part of the evaluation rather than the enemy of it. Give the candidate a realistic problem and a working AI collaborator, tell them you'd like to see the conversation, and then actually read the conversation. You find out quickly which candidates are steering the tool and which are being steered by it.

If you're using CriticCode, this happens by default: the candidate chats with an in-page AI while they work through your prompts, and the transcript lands alongside their answers when you open the submission. The same pattern works without CriticCode too; the tool just removes the friction of collecting the transcript.

The industry took roughly a decade to retire "don't use Google during this interview". The AI version of that rule looks like it's on a similar trajectory, and the candidates on the other side of the table are usually ahead of it. If you'd like to see what that looks like in practice, the cheapest experiment is to drop the clause from your next take-home, ask candidates to narrate their AI use instead, and compare what lands on your desk. Most teams who try this tell us the conversations get noticeably richer once the tool is in the room.