Most interview processes in 2026 don't have a position on AI. They have a paragraph. Somewhere in the candidate-facing brief there's a line that says something like "please don't use AI for this exercise" or "you may use AI where helpful" — written a year or two ago, not really enforced, not really abandoned. Candidates read it, try to guess what the team actually wants, and choose their behaviour based on the guess.

You can feel the shape of this in the submissions that come back. Some candidates carefully hide their AI use. Some mention it once and leave it ambiguous. Some submit polished artefacts that were obviously AI-assisted without saying so. What unites them is that almost nobody is telling you the full story of how they worked, because the process didn't make clear what story you wanted to hear.

The cost of not having a position

A policy says what's allowed. A position says what you believe. Policies are easy to write and easy to ignore. Positions change how the interview is scored.

Teams that haven't taken a position on AI in job interviews tend to produce a certain kind of noise. The scoring starts to depend on whichever reviewer happens to read the submission — one treats AI use as cheating, the next treats it as resourcefulness, and the candidate's fate quietly becomes a function of the draw. Candidates who would have been honest about their workflow get filtered out in favour of candidates who are good at performing whichever version the team seems to want. And the debrief stops being a conversation about the candidate; it becomes a conversation about what was "fair." Once that happens, the hiring decision has already slipped out of calibration.

None of this is really about AI. It's about the fact that an interview process without a clear stance can't produce a clear signal.

A ban that actually works

"No AI" is a defensible position. It's not a very useful one in a take-home, for reasons I've written about before: the rule can't be enforced, and in practice it filters for candidates willing to pretend they complied over candidates who can't bring themselves to. But it can work in a different shape.

A ban that works looks like a live, supervised round. The candidate sits in a call with an interviewer, shares their screen, and works through a problem while the interviewer watches. No external tabs, no second monitor, no mystery. The trade-off is that you're now measuring performance under surveillance, which isn't much like the job. You're also generating a weaker read on how the candidate thinks, because they're spending energy managing the format rather than the problem. That's a known cost of the shape — not a bug in your process.

If you want to ban AI in the async parts of your process, the honest version is: don't have async parts. The round where you watch is the round where the ban holds.

AI in the room, on purpose

The other side of the position is: treat AI as part of the environment the candidate works in, because it is. The question shifts from "did they use AI?" to "how did they use it, and was the reasoning theirs?" That's a harder question to answer, but it's the one the job actually asks.

In practice, that looks like giving the candidate a realistic problem and explicit permission to work with AI, then asking them to submit the conversation along with the artefact. Read the conversation first. You'll see, within ten minutes, which candidates were steering the tool and which were being steered by it. You'll also see the candidates who can explain why they overrode the model's suggestion — which is often the clearest read on engineering judgement you'll get before the live round.

At CriticCode this happens by default. The candidate works alongside an AI collaborator in the page, the transcript comes back with the submission, and the follow-up starts from the actual artefacts of the thinking rather than from a score. The shape works without CriticCode too, once you've committed to reading the transcript as part of the round.

How to decide

The test for whether you've taken a position is simple. Describe your approach to AI for interviews in one sentence, as if you were writing it on the candidate brief. If the sentence hedges — "AI may be used where appropriate" — you haven't taken one. Commit to a shape, communicate it clearly, score for it consistently. The candidates on the other side of the table have already decided what they think about AI. The least your process can do is the same.