For years, tech interviews were designed around one core idea:
Can you solve problems on your own?
Now Google appears ready to test something very different:
Can you solve problems effectively with artificial intelligence beside you?
According to internal documents, Google is experimenting with a new interview process that would allow software engineering candidates to use AI assistance during portions of technical interviews. Initially, the company plans to test the format with certain US teams before potentially expanding it globally.
And honestly?
This may become one of the most important shifts in hiring culture since coding interviews themselves became standard.
Because this is not just about interviews.
It is about redefining what companies believe “skill” actually means in the AI era.
Under the proposed system, candidates would use an approved AI assistant — reportedly Google Gemini during the testing phase — while completing exercises involving:
- debugging code,
- understanding existing repositories,
- optimizing systems,
- and validating AI-generated outputs (a sketch of what one such exercise might look like follows this list).
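To make that concrete, here is one hypothetical flavor of such an exercise. Everything in it is invented for illustration, not drawn from Google's documents: an assistant produces a plausible-looking helper, and the candidate's job is to decide whether it actually matches the spec.

```python
# Hypothetical exercise: vet an "AI-generated" helper before trusting it.
# Spec: does_overlap reports whether two closed ranges [start, end] overlap.

def does_overlap(a_start, a_end, b_start, b_end):
    # Plausible-looking assistant output. It silently assumes start <= end,
    # and it treats ranges that merely touch (a_end == b_start) as overlapping.
    return a_start <= b_end and b_start <= a_end

# A candidate verifying the output probes the edges, not just the happy path:
assert does_overlap(1, 5, 3, 8) is True    # genuine overlap
assert does_overlap(1, 5, 6, 8) is False   # clearly disjoint
print(does_overlap(1, 5, 5, 8))            # touching endpoints: True here.
                                           # Is that the spec, or a bug?
```

The point is not the bug itself. The point is whether the candidate thinks to probe the boundary at all.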
Interviewers would reportedly evaluate:
- prompt engineering,
- AI collaboration,
- output verification,
- critical thinking,
- and debugging ability.
That changes the equation completely.
For years, companies primarily tested:
- memory,
- syntax knowledge,
- and raw technical problem-solving.
Now the focus may be shifting toward:
- judgment,
- verification,
- adaptability,
- and the ability to work effectively alongside AI systems.
That may sound logical.
But it also raises a fascinating question:
If AI increasingly writes the code…
what exactly separates great engineers from average ones?
That may become one of the defining employment questions of the next decade.
Google says the changes are intended to better reflect modern software development workflows. And in fairness, the industry has already changed dramatically. The company recently stated that a large share of its new code is now AI-generated, and similar claims are emerging across the tech industry as coding agents become increasingly sophisticated.
The uncomfortable reality may be this:
The traditional definition of programming is already changing faster than many universities, hiring systems, and workers are prepared for.
And that may create both enormous opportunity and enormous confusion.
Supporters of AI-assisted interviews argue that the new system mirrors reality. Engineers already use AI tools daily. Preventing candidates from using them during interviews may eventually become as outdated as banning calculators in accounting.
But critics may wonder whether companies are accidentally creating a future where fewer people deeply understand the systems they are building.
Because there is a major difference between:
using AI as a tool,
and
becoming dependent on it.
That distinction matters.
Very few companies want engineers who blindly accept AI-generated answers without understanding:
- why the code works,
- what vulnerabilities exist,
- what security risks are hidden,
- or whether the output is even correct (the sketch below shows how easily such a flaw slips past a casual glance).
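Here is a minimal, hypothetical sketch of what that looks like in practice. The table and function names are invented. The first version is exactly the kind of code an assistant will cheerfully produce, and it passes every casual demo:

```python
import sqlite3

# Hypothetical illustration (table and function names invented).
# Version 1: the kind of code an assistant will happily produce.
def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Hidden risk: interpolating name into the SQL means an input like
    # "x' OR '1'='1" rewrites the query. Classic SQL injection.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

# Version 2: what a reviewer who understands why the code works insists on.
def find_user_safe(conn: sqlite3.Connection, name: str):
    # Parameterized query: the driver treats name strictly as data,
    # never as SQL, no matter what it contains.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```

Both functions "work". Only one survives contact with a hostile input, and only a reviewer who understands why the first one works will notice the difference.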
And that may be where the real value shifts in the future.
The winners may not necessarily be the people who can generate the most code.
The winners may become the people who can:
- identify mistakes,
- validate outputs,
- ask better questions,
- recognize hallucinations,
- debug efficiently,
- and understand systems deeply enough to know when AI is wrong (one such verification habit is sketched below).
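As a hedged sketch of that habit, with invented function names: instead of eyeballing AI output, compare it against a reference implementation you already trust, on many inputs rather than one.

```python
import random

# Hypothetical sketch (names invented): differential testing.
# Suppose an assistant produced this "order-preserving dedupe":
def ai_dedupe(items):
    return list(set(items))  # bug: a set does not preserve insertion order

# A reference implementation you already trust:
def reference_dedupe(items):
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# Compare the two on many random inputs instead of one happy-path demo.
for _ in range(1000):
    data = [random.randint(0, 5) for _ in range(10)]
    if ai_dedupe(data) != reference_dedupe(data):
        print("mismatch on", data)  # the false guarantee surfaces in seconds
        break
```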
In other words:
AI may not eliminate expertise.
It may make real expertise easier to recognize, and its absence harder to hide.
Because once everyone has access to AI-generated assistance, human discernment may become the most valuable skill left in the room.
Google’s experiment may therefore reveal something much larger happening across the economy:
The workplace is slowly shifting away from measuring pure production…
and toward measuring judgment.
That may fundamentally change:
- hiring,
- education,
- technical training,
- and what companies consider talent itself.
And honestly?
Nobody fully knows yet how this is going to turn out.
The Grey Ghost