The Rise of Real-Time Interview Botting

Technical recruiter using forensic AI detection tools to identify real-time interview botting and deepfake artifacts during a candidate vetting session.

This second installment in our forensic series moves from the static deception of the application stage to the dynamic, real-time fraud occurring during live technical interviews.

In our first article, we explored the broader AI hiring fraud and Bill 190 vetting landscape and how current practices leave teams exposed. Now, we go deeper into the specific mechanics of real-time botting.

For years, the biggest threat to technical hiring was an embellished resume. Today, that threat looks primitive. While many firms are still struggling to filter out AI-polished resumes, the most sophisticated fraudsters have moved their operations directly into your live video calls.

We are seeing the rise of real-time interview botting—a method where candidates use LLM-powered co-pilots to feed them answers, code snippets, and even behavioral cues during a live interview.

If your hiring process relies on a candidate’s ability to talk tech without forensic verification, you may not be hiring a software engineer. You might be hiring a prompt engineer.

The mechanics of the co-pilot fraud

An interview bot isn’t a single tool; it’s a stack designed to simulate technical fluency by bypassing a recruiter’s judgment. The goal is to ensure the candidate never stumbles on a technical definition or a logic puzzle, creating an illusion of expertise that doesn’t exist.

Transcription overlays

Software listens to the interviewer’s question, transcribes it instantly, and sends it to a private LLM.

Real-time prompting

Within seconds, the LLM generates a response displayed on a hidden monitor or teleprompter.

The most difficult tell for a candidate to hide is the neuro-linguistic signature of their gaze. When humans retrieve an episodic memory (e.g., “Tell me about your hardest bug”), our eyes naturally drift, usually upward or into the distance, as the brain reconstructs visual and spatial details.

In contrast, a candidate reading an AI overlay will display horizontal saccades: rapid, rhythmic left-to-right eye movements across the screen. Recent Baycrest cognitive research (Feb 2026) reports that natural memory recall is preceded by a distinctive burst of eye movements that is extremely difficult to replicate while simultaneously reading a scrolling script.
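The distinction above can be sketched as a simple heuristic. This is a hypothetical illustration, not a production detector: it assumes gaze (x, y) coordinates have already been extracted by an eye tracker, and the 0.8 threshold is illustrative, not calibrated.

```python
# Hypothetical sketch: flag "reading-like" gaze from (x, y) samples.
# Rhythmic left-to-right sweeps make movement mostly horizontal,
# while natural recall tends to drift upward or off into the distance.

def horizontal_ratio(gaze):
    """Fraction of total gaze movement that is horizontal."""
    dx = sum(abs(x2 - x1) for (x1, _), (x2, _) in zip(gaze, gaze[1:]))
    dy = sum(abs(y2 - y1) for (_, y1), (_, y2) in zip(gaze, gaze[1:]))
    total = dx + dy
    return dx / total if total else 0.0

def looks_like_reading(gaze, threshold=0.8):
    """Movement dominated by horizontal sweeps suggests on-screen reading."""
    return horizontal_ratio(gaze) >= threshold

# Reading a scrolling overlay: repeated left-to-right sweeps on one line.
reading = [(x, 100) for x in range(0, 400, 20)] * 3
# Natural recall: a slow, upward, meandering drift.
recall = [(200 + i * 3, 100 - i * 10) for i in range(20)]
```

A real system would also need blink filtering, calibration per candidate, and a much larger observation window; this only shows the shape of the signal.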

Voice synthesis

Advanced users apply AI voice filters that mask their own voice with a clone that sounds more local or authoritative, hiding their real identity or location.

Just like AI-generated resumes, AI voices are often too perfect. Human speech is messy and filled with micro-prosodic inconsistencies.

In a nutshell, AI models exhibit prosodic regularity. That means the rhythmic timing of syllables is mathematically consistent.

So listen for spectral envelope divergence. That’s a scientific way of saying that if a candidate’s pitch and volume remain perfectly stable through a long technical explanation, you are likely listening to a synthetic stream. It’s not a guaranteed method, but it’s a strong tell.
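The prosodic-regularity idea can be sketched numerically. This is a hedged toy example, not a real forensic pipeline: it assumes frame-level loudness (RMS energy) values have already been extracted by a speech tool, and the 0.05 cutoff is purely illustrative.

```python
# Hypothetical sketch: score prosodic "flatness" from per-frame loudness.
# Human speech is messy, so its energy spreads widely; a synthetic stream
# with mathematically consistent rhythm stays near-constant.

from statistics import mean, pstdev

def coefficient_of_variation(frames):
    """Relative spread of frame energies (std dev divided by mean)."""
    mu = mean(frames)
    return pstdev(frames) / mu if mu else 0.0

def suspiciously_flat(frames, cutoff=0.05):
    """Near-constant energy over a long answer is a (weak) synthetic tell."""
    return coefficient_of_variation(frames) < cutoff

human = [0.42, 0.18, 0.61, 0.09, 0.55, 0.30, 0.71, 0.12]      # varied
synthetic = [0.50, 0.51, 0.49, 0.50, 0.52, 0.50, 0.49, 0.51]  # flat
```

As the article notes, this is a tell rather than proof; quiet rooms, compression, and rehearsed answers can all flatten prosody too.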

The result is a polished technical performance with no foundation in actual delivery. This performance gap is a primary driver behind the rising cost of a bad hire, which we’ve quantified at over $50,000 in lost time and productivity. And that’s before accounting for the $1,000 per day in lost delivery momentum.

Why buzzwords are dead

In the previous era of recruiting, a hiring manager could gauge expertise by listening for specific architectural nuances. In 2026, the bots have mastered those nuances.

If you ask a candidate, “How do you handle race conditions in a distributed system?”, a bot can generate a textbook answer in under three seconds. The signal is no longer the answer itself; it is the process the candidate uses to arrive at that answer.

To combat this, STACK IT recruiters have moved away from template-based screening. We use a specific set of questions to ask candidates designed to break a bot’s logic by requiring real-world project retrospectives and layman’s terms explanations that scripts cannot simulate.

Forensic detection: spotting the bot in real-time

Our Forensic AI Hiring Playbook was built to identify the subtle glitches that occur when a human is being assisted by an AI bot. Here is what our team looks for during the technical vetting process:

  1. We monitor for a 2–3 second delay between the question and the start of the answer. We also look for consistent eye movement toward a specific off-camera quadrant where a teleprompter or secondary screen is likely located.
  2. Bots are excellent at the “what” but struggle with the “why not.” We ask candidates to defend a bad architectural choice or explain a tradeoff that didn’t work. A scripted candidate will often struggle to pivot because their LLM prompt was designed for a best-case scenario.
  3. As detailed in our playbook, we use maneuvers like the Profile Turn and the Hand Pass to disrupt deepfake overlays and AI video mapping. By requiring a 90-degree turn, we force the AI to re-render the profile, which often results in visible digital artifacts.
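The latency tell from step 1 can be sketched as code. This is a hypothetical heuristic under stated assumptions: answer delays (in seconds, measured from the end of the question to the start of speech) are already logged, and the window and spread cutoffs are illustrative, not validated.

```python
# Hypothetical sketch of the latency tell: an assistance pipeline adds a
# fairly consistent transcribe-and-generate delay, while genuine human
# "thinking time" varies with question difficulty.

from statistics import median, pstdev

def latency_flag(latencies, lo=2.0, hi=3.5, max_spread=0.5):
    """Flag if answer delays cluster tightly inside the pipeline window."""
    return lo <= median(latencies) <= hi and pstdev(latencies) <= max_spread

assisted = [2.4, 2.6, 2.5, 2.7, 2.5]    # uniform, pipeline-shaped delays
unassisted = [0.4, 3.1, 0.8, 5.2, 1.1]  # varies with question difficulty
```

No single flag here is decisive; in practice these signals would be combined with the behavioral and visual checks above.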

According to the LexisNexis 2026 Cybercrime Report, AI identity fraud accounted for 11% of total fraud globally in 2025. That’s an eight-fold increase from the previous year. Furthermore, research from the Cyble 2026 Executive Threat Monitoring report reveals that AI-powered deepfakes were involved in over 30% of high-impact corporate impersonation attacks in the last 12 months.

Bill 190 and AI transparency

As an employer, you are now operating under strict Bill 190 hiring requirements. You must disclose your use of AI in the hiring process, including the tools you use to detect AI on the candidate’s side.

The challenge is being transparent without giving fraudsters a security audit of your detection methods. At STACK IT, we manage this by disclosing the use of AI-enhanced tools like BrightHire for recording and transcribing, while keeping our forensic questioning techniques as a guarded internal standard. This ensures we remain audit-ready without making our clients a target for sophisticated prompt gamers.

Vetting is no longer optional

The rise of real-time interview botting means that hiring for speed is now the fastest way to fail. In the 2026 market, you cannot afford to skip the forensic layer of technical recruitment.

At STACK IT, we send you verified humans who have been pressure-tested against the latest fraud stacks. We vet for technical precision and cultural fit so that your team can focus on outcomes, not fixing the fallout of a fraudulent hire.

Don’t get gamed by the next ‘perfect’ candidate. Download the Forensic AI Hiring Playbook to see our full list of detection maneuvers and the Bill 190 Ontario Hiring Compliance Checklist.

Need immediate help? Call (905) 238-9204
