AI Disclosure Without Inviting Fraud: A Bill 190 Strategy

Technical recruitment infographic showing the balance between Bill 190 AI disclosure and forensic fraud protection.

As of January 1, 2026, the Ontario hiring landscape changed fundamentally. Under Bill 190, employers are legally mandated to disclose the use of artificial intelligence in their recruitment processes, specifically when AI is used to screen, assess, or select candidates.

For tech leaders, this creates a significant security challenge. How do you comply with transparency laws without handing a roadmap to fraudsters?

If your job posting describes your AI detection tools in too much detail, you are effectively telling Prompt Gamers exactly which filters they need to bypass. At STACK IT, we believe that compliance should never come at the cost of technical integrity.

Understanding the compliance mandate

To navigate the new regulations, you must first understand the distinction between Bill 149 and Bill 190. While Bill 149 laid the groundwork for AI transparency, Bill 190 introduced stricter requirements for vacancy labeling, salary disclosure, and 45-day candidate follow-up.

The AI disclosure requirement specifically triggers when an automated system plays a role in the selection or assessment of a candidate. This includes:

  • Resume parsers. Tools that use AI to rank or score candidates based on keywords.
  • Transcription and analysis tools. Software that transcribes interviews and provides sentiment or competency scores.
  • Automated scoring tools. Any third-party software that issues a pass/fail grade based on AI analysis.
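The trigger logic above can be sketched in a few lines. This is a hypothetical illustration, not legal advice: the tool names and the "screens, assesses, or selects" rule of thumb are assumptions drawn from the description in this section.

```python
# Hypothetical sketch: flag which tools in a hiring stack would trigger
# Bill 190's AI-disclosure requirement. A tool triggers disclosure if it
# uses AI *and* plays a role in screening, assessing, or selecting.

AI_SELECTION_ROLES = {"screen", "assess", "select"}

def requires_disclosure(tool: dict) -> bool:
    """True if the tool uses AI and influences candidate selection."""
    return tool["uses_ai"] and bool(AI_SELECTION_ROLES & set(tool["roles"]))

stack = [
    {"name": "resume_parser", "uses_ai": True, "roles": ["screen"]},
    {"name": "scheduler",     "uses_ai": True, "roles": ["admin"]},
    {"name": "scoring_tool",  "uses_ai": True, "roles": ["assess", "select"]},
]

flagged = [t["name"] for t in stack if requires_disclosure(t)]
print(flagged)  # ['resume_parser', 'scoring_tool']
```

Note that the scheduler is AI-powered but purely administrative, so under this reading it would not require disclosure in the job posting.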

Failing to include this disclosure can expose you to provincial violations and audits of your three-year record-retention obligations.

However, over-sharing your methods is equally dangerous.

The security risk of over-disclosure

In our deep dive on why automated AI detectors fail, we established that you can’t fight a bot with a bot.

When you list specific AI detection tools in your Bill 190 hiring requirements documentation, you are providing adversarial training data to dishonest candidates.

For example, if a candidate knows you use a specific vendor to check for real-time botting, they will simply use a cleaner LLM wrapper designed to avoid that vendor’s specific detection patterns. This is why STACK IT maintains a security through obscurity policy for our clients.

Drafting a protected disclosure

A compliant disclosure should be clear about the existence of AI tools but vague about the criteria and specific vendors used. This satisfies the law without inviting fraud.

The STACK IT standard disclosure for job postings:
STACK IT uses AI-enhanced tools to support initial candidate screening and interview note analysis. All assessments and hiring decisions remain human-led.

This phrasing works because it acknowledges the presence of technology (transcription via BrightHire or resume tracking via Workable) while reinforcing that human insight is the final authority. It does not give the fraudster a target to hit.

Maintaining forensic integrity post-disclosure

Once the disclosure is published, your primary defense moves from the job posting to the interview. Because your detection maneuvers, like these 3 physical tests to spot a deepfake candidate, are human-led actions rather than automated AI selection steps, they do not require specific vendor-level disclosure in the job ad.

We recommend a two-layered defense:

  1. Administrative AI: Use compliant tools for transcription and scheduling to stay organized and audit-ready.
  2. Human Forensics: Use the Profile Turn and Hand Pass during live calls to verify humanity in real-time.
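The two-layered model above can be made concrete with a small sketch. The step names and the rule (only automated steps that influence selection need AI disclosure) are illustrative assumptions based on this section's reasoning, not a statutory test.

```python
# Hypothetical sketch of the two-layer defense: administrative AI steps
# are disclosed in the posting, while human-led forensic checks are not
# automated selection steps and so fall outside the disclosure duty.

from dataclasses import dataclass

@dataclass
class Step:
    name: str
    automated: bool  # is the step AI-driven?
    selects: bool    # does the step influence who advances?

PIPELINE = [
    Step("interview_transcription", automated=True,  selects=False),
    Step("scheduling",              automated=True,  selects=False),
    Step("profile_turn_test",       automated=False, selects=True),
    Step("hand_pass_test",          automated=False, selects=True),
]

# Only automated steps that influence selection need vendor-level disclosure.
needs_disclosure = [s.name for s in PIPELINE if s.automated and s.selects]
print(needs_disclosure)  # []
```

In this configuration nothing needs vendor-level disclosure: the AI layer is purely administrative, and the selection-critical tests are human-led.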

These human-led tests are the only way to catch a high-end imposter who has already optimized their bot to pass your disclosed automated filters.

According to Deloitte’s 2026 Human Capital Trends, organizations that prioritize disinformation security, specifically in their talent pipelines, report 22% higher retention rates among top-tier talent. High performers do not want to work alongside proxy hires; they want to be part of an elite, verified human team.

A unified strategy

In the 2026 market, compliance is a baseline, but security is the differentiator. You cannot have one without the other. If your process is transparent but vulnerable, you will eventually face the high cost of a bad hire—quantified at over $50,000 per mis-hire.

At STACK IT, our success-based recruiting model integrates Bill 190 compliance directly into a forensic vetting framework. We handle the legal disclosures so you can focus on building your team with absolute certainty that every hire is both authentic and qualified.

Is your process compliant, or is it a vulnerability? Don’t let transparency become your biggest security hole. Download the Forensic AI Hiring Playbook to see our full list of detection tests and access the complete Bill 190 Ontario Hiring Compliance Checklist.


