This fourth installment in our forensic AI interview series explores the limitations of technology in the vetting process. While previous articles defined the crisis and broke down the mechanics of real-time interview botting, we now address why automated detectors are a dangerous shortcut and why human intuition remains the ultimate filter.
As noted in the Darktrace State of AI Cybersecurity 2026 Report, 92% of security professionals are now concerned about the impact of agentic bots posing as human contributors.
As technical hiring fraud becomes more sophisticated, many organizations are looking for an easy button. The temptation is to buy another piece of software to solve the problem that software created.
But in 2026, the reality is you cannot fight a bot with a bot.
Relying solely on automated AI detectors creates a false sense of security that leaves your organization vulnerable to the most sophisticated imposters.
At STACK IT, our guiding principle is human insight > automation. AI is a tool, but human recruiters are what separate a ‘perfect’ digital profile (a fake) from a real-world technical asset.
A fundamental flaw in automated detectors
Automated AI detectors work by looking for statistical patterns: predictable word sequences in text, or pixel-level artifacts in images and video. However, today’s generative AI is designed specifically to disrupt those patterns. In the current AI arms race, the fraud stack is always one step ahead of the detection tech.
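To make that concrete, here is a minimal sketch of the perplexity-style check that many text detectors are built on: score how predictable a passage is to a language model, and flag text that is suspiciously predictable. The model choice and the threshold below are illustrative assumptions for this sketch, not any specific vendor’s method.

```python
# Minimal sketch of a perplexity-style AI-text check (illustrative only).
# Assumes: pip install torch transformers. "gpt2" and the 40.0 threshold
# are arbitrary demonstration choices, not a vendor's actual detector.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average 'surprise' of the model at each token, exponentiated."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return float(torch.exp(loss))

def looks_generated(text: str, threshold: float = 40.0) -> bool:
    # Low perplexity = highly predictable text, the classic "AI tell".
    return perplexity(text) < threshold
```

The flaw is baked in: anyone who can run this check can also paraphrase their output until it passes, which is exactly what sophisticated fraudsters do.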
When you rely on a software score to verify a candidate, you are ignoring the most critical signal: intent. A bot can solve a technical challenge, but it cannot explain the why behind a failed project or discuss the nuances of team collaboration with authentic emotion.
This is why we focus on technical precision over resume volume. If your process doesn’t account for the human nuances of the interview imposter crisis, you’re gambling with your tech stack.
The AI arms race paradox: AI beats AI
The most sophisticated fraudsters are using the exact same technology as the detectors to bypass them. They run their deepfakes and AI-generated code through common detectors before the interview, tuning them until they score as human.
This is especially prevalent in the rise of real-time interview botting, where candidates use co-pilots to feed them answers during live calls. An automated tool might miss the subtle shifts in eye movement or the 2-second answer latency of a prompt gamer, but a trained human recruiter will not.
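For illustration, here is a rough sketch of how a tool might try to flag that answer latency from timestamped transcript turns. The `Turn` data shape and the 2-second cutoff are assumptions for demonstration; as the surrounding argument goes, a long pause is a soft signal that still needs a human to interpret.

```python
# Hypothetical latency flag over timestamped interview turns (sketch only).
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str   # "interviewer" or "candidate"
    start: float   # seconds into the call
    end: float

def flag_answer_latency(turns: list[Turn], cutoff: float = 2.0) -> list[float]:
    """Return the pauses before candidate answers that exceed the cutoff.

    A consistent ~2s gap before every technical answer (time to read a
    co-pilot's output) is a soft signal, not proof: it also fits honest
    thinking time, which is why a human reviews every flag.
    """
    flags = []
    for prev, cur in zip(turns, turns[1:]):
        if prev.speaker == "interviewer" and cur.speaker == "candidate":
            pause = cur.start - prev.end
            if pause >= cutoff:
                flags.append(pause)
    return flags

# Example: a 2.4s pause before the candidate's answer gets flagged.
calls = [Turn("interviewer", 0.0, 12.5), Turn("candidate", 14.9, 40.0)]
print(flag_answer_latency(calls))  # [2.4]
```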
Detectors often produce false positives on non-native English speakers or false negatives on high-end deepfakes. In both cases, the automated tool fails to provide the certainty that a scaling tech team requires.
The human advantage: real-time observation
Human-led vetting succeeds where automation fails because it is adaptive. A bot follows a script; a STACK IT recruiter follows real life.
When we suspect a candidate is using a digital mask or a real-time prompt, we don’t wait for a software report. We deploy tactical maneuvers, such as physical tests designed to spot a deepfake candidate by forcing a digital glitch in real time:
- We interrupt a scripted flow with high-context, environment-based questions that an LLM cannot predict.
- We observe eye-line movement to identify if a candidate is reading from a hidden teleprompter.
- We measure the rhythm of a conversation. Authentic technical experts have a specific cadence when recalling past project failures that a botting tool simply cannot replicate.
Compliance and our human decision mandate
Beyond detection accuracy, there is the issue of Bill 190 compliance. In Ontario, if you use AI to screen, assess, or select candidates, you must disclose it in the job posting.
If your decision to reject a candidate is based purely on an automated AI-detector score, you may be creating a legal and ethical liability. STACK IT’s process remains human-led precisely to ensure that every hiring recommendation is defensible, documented, and based on verified human interaction.
We use AI-enhanced tools like BrightHire to support our recruiters, but the final verdict on a candidate’s authenticity always belongs to the person in the room. This keeps our clients audit-ready while ensuring they only meet candidates who are technically strong and culturally aligned.
Trusting the forensic layer
Automated tools are a secondary check and should never be the primary defense. The only way to protect your team from the high cost of a bad hire is to partner with recruiters who know tech as well as they know people.
At STACK IT, we provide a success-based recruiting model that is built on this forensic approach. We don’t flood your inbox with candidates who passed a bot check. We deliver a small batch of verified professionals who have been met, challenged, and validated by a human expert.
Stop trusting the easy button. Start trusting the forensic AI layer. Is your vetting process a security risk? Download the Forensic AI Hiring Playbook to see our full list of detection maneuvers.