AI-Generated Faces Now Beat Real Ones. Regulators Just Noticed.
Researchers published a warning this week that should end every remaining argument about deepfakes being "detectable." AI-generated human faces have crossed a threshold: they now appear more trustworthy to human observers than photographs of real people.
The implications for fraud, identity verification, legal proceedings, and basic professional trust are significant. And they arrived the same week that India enacted the world's most aggressive deepfake response to date: a three-hour takedown mandate, mandatory traceability for AI-generated content, and new liability for platforms that fail to comply.
Two things are true simultaneously. The technology has outpaced human detection. And governments are starting to respond in ways that will create real compliance burdens for businesses that use AI-generated content.
What the Research Actually Found
The research circulating this week builds on a known phenomenon. Studies have shown for several years that AI-generated faces can fool humans. The new warning from researchers is that the gap has widened: in head-to-head comparisons, AI-generated faces now score higher on perceived trustworthiness than photographs of actual people.
This is counterintuitive until you understand why. Real faces are asymmetrical. They carry the history of the person's life in subtle ways: sun damage, uneven features, the accumulated evidence of expression and age. AI-generated faces are optimized for the statistical center of attractive, trustworthy facial features. They are, in a specific sense, more "perfect" than real faces. And that perfection reads as trustworthy to the human pattern-matching system that evolved to read faces.
The practical consequence is that you can no longer train yourself or your team to "spot a deepfake" by looking at it. That skill does not exist in any reliable form. Detection now requires technical analysis, metadata inspection, and behavioral verification, not visual scrutiny.
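To make "metadata inspection" concrete at its most basic level, here is a minimal triage sketch in Python. It assumes the Pillow library is installed; the byte scan for a C2PA manifest marker is a crude heuristic I'm using for illustration, and a missing marker proves nothing.

```python
# Crude provenance triage for an image file. A heuristic sketch, not a
# detector: a missing marker proves nothing, and markers can be stripped
# or forged. Real verification needs a C2PA-aware library plus
# server-side checks.
import sys
from PIL import Image  # pip install Pillow

SOFTWARE_TAG = 0x0131  # standard EXIF "Software" tag

def triage(path: str) -> dict:
    with open(path, "rb") as f:
        raw = f.read()
    img = Image.open(path)
    exif = img.getexif()
    return {
        # C2PA content credentials embed a manifest labeled "c2pa"; a raw
        # byte scan only hints that one *might* be present.
        "c2pa_marker_bytes": b"c2pa" in raw,
        "exif_software": exif.get(SOFTWARE_TAG),  # generator/editor, if recorded
        "format": img.format,
    }

if __name__ == "__main__":
    print(triage(sys.argv[1]))
```

Even this toy version makes the point: the signal lives in the file's plumbing, not in its pixels.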
For any business that relies on visual identity verification, video calls as a form of client authentication, or human judgment to screen AI-generated content, this research amounts to an operational risk assessment, and the verdict is blunt: the controls you currently have are insufficient.
Three Hours to Comply: What India Just Built
On February 10, 2026, India's Ministry of Electronics and Information Technology enacted amendments to its IT Intermediary Guidelines that represent the most operationally aggressive deepfake regulation to date.
Four structural changes took effect:
First, a statutory definition of "deepfake," covering any content generated using algorithmic or computational techniques to produce sound, visuals, or both that is convincing enough to pass as an authentic representation of a real individual. Standard video corrections, noise removal, and accessibility modifications are excluded.
Second, a reduction in the takedown timeline from 36 hours to three hours. Platforms must remove flagged deepfake content within three hours of receiving a valid complaint. Not 36. Not 24. Three.
Third, mandatory technical disclosure and traceability for AI-generated content. Platforms must implement systems to identify and track AI-generated material, not just respond to complaints about it.
Fourth, expanded compliance and dispute resolution obligations for significant intermediaries, meaning large platforms with substantial user bases.
India is not a fringe regulatory environment. It has 900 million internet users. The platforms subject to these rules include every major social network, every AI content platform, and potentially any business operating content platforms in that market.
The three-hour takedown standard deserves particular attention. The prior standard was 36 hours, which is already aggressive compared to most jurisdictions. Compressing it to three hours means that complaint handling, content review, and removal must be fully automated for any platform at scale. No human review process operates at three-hour compliance windows across billions of pieces of content. This is a mandate for algorithmic moderation.
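To make the operational math concrete, here is a minimal sketch of the SLA clock such a pipeline has to run against. The field names and the surrounding queue are hypothetical; only the three-hour constant comes from the regulation.

```python
# Minimal SLA clock for a takedown pipeline. The Complaint fields are
# hypothetical; the three-hour constant is the rule's. The point: the
# window is short enough that triage must start automatically.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

TAKEDOWN_SLA = timedelta(hours=3)

@dataclass
class Complaint:
    content_id: str
    received_at: datetime  # timezone-aware time the valid complaint arrived

    @property
    def deadline(self) -> datetime:
        return self.received_at + TAKEDOWN_SLA

    def time_remaining(self, now=None) -> timedelta:
        now = now or datetime.now(timezone.utc)
        return self.deadline - now

# Example: a complaint filed two hours ago leaves one hour to review,
# decide, remove, and document.
c = Complaint("vid-123", datetime.now(timezone.utc) - timedelta(hours=2))
print(c.deadline.isoformat(), c.time_remaining())
```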
The Accountability Gap This Creates
India's rules put the obligation on platforms. But the upstream question of who is responsible for the original AI-generated content remains murkier than regulators have acknowledged.
Here is the gap: the research confirms that AI-generated faces now fool humans reliably. India's rules require platforms to remove content within three hours. But neither the research nor the regulation addresses what happens when AI-generated content is used in contexts that platforms don't moderate: direct messages, email, video calls, court filings, professional communications.
I've spent 19 years as an attorney. I've watched evidence rules evolve across technology cycles. The deepfake problem in legal proceedings is not primarily a social media problem. It's a chain-of-custody problem. If an AI-generated video can be submitted as evidence in a proceeding, and human observation cannot detect that it's synthetic, then the authenticity verification mechanisms courts currently use are structurally inadequate.
Most jurisdictions require authentication of video evidence. Authentication means demonstrating that the evidence is what it claims to be. For video, this has historically meant establishing chain of custody, confirming the recording device, and having the content reviewed by the parties. None of those authentication mechanisms are designed to catch technically sophisticated AI generation. A deepfake produced by current-generation tools, presented with fabricated metadata and a convincing chain of custody narrative, could pass current authentication standards.
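The nearest available technical building block is cryptographic fingerprinting at the moment a file enters custody. A minimal sketch, with illustrative field names, that shows both what it proves and what it doesn't:

```python
# Fingerprint a video file at the moment it enters custody. If the file is
# altered later, the recorded hash will no longer match. Note the limit:
# this proves the file hasn't changed since the record was made, not that
# the footage was authentic when captured. Field names are illustrative.
import hashlib
import json
from datetime import datetime, timezone

def custody_record(path: str, custodian: str) -> dict:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return {
        "file": path,
        "sha256": digest.hexdigest(),
        "custodian": custodian,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

print(json.dumps(custody_record("deposition.mp4", "clerk-07"), indent=2))
```

Hashing closes the tampering gap after capture. It does nothing about a file that was synthetic before the first hash was ever taken, which is exactly where current authentication standards are weakest.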
This is not hypothetical. Cases involving AI-generated evidence have already appeared in civil proceedings. The question is not whether this will happen in high-stakes contexts. The question is what the legal system will do when it does, and how far the damage goes before rules catch up.
What Businesses Should Actually Do Right Now
India's three-hour takedown mandate and the global reality of undetectable deepfakes force a set of practical decisions on any business that interacts with AI-generated content, whether it produces it, moderates it, or receives it.
If you produce AI-generated content for marketing, training, or communications: Document your processes now. As regulations expand into additional jurisdictions (the EU AI Act, US state laws, and likely federal action in 2026), the businesses with documented provenance for their AI content will be in defensible positions. The businesses that can't reconstruct which content was AI-generated and when will face both regulatory and litigation exposure.
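What "documented provenance" can look like at its simplest is an append-only log entry per published asset. The schema below is an illustration I'm sketching, not any standard:

```python
# One append-only provenance entry per published asset. The schema is an
# assumption, not a standard; the goal is being able to answer "was this
# AI-generated, with what tool, and when" months after the fact.
import hashlib
import json
from datetime import datetime, timezone

def provenance_entry(asset_path: str, ai_generated: bool, tool: str) -> str:
    with open(asset_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return json.dumps({
        "asset": asset_path,
        "sha256": digest,            # content fingerprint, survives renames
        "ai_generated": ai_generated,
        "generation_tool": tool,     # model or product name and version
        "logged_at": datetime.now(timezone.utc).isoformat(),
    })

with open("provenance.log", "a") as log:
    log.write(provenance_entry("hero_image.png", True, "image-model-v3") + "\n")
```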
If you use video calls or visual verification as identity checks: Your current process is compromised. This is not a future risk. Today's tools can generate real-time synthetic video at a quality that defeats human visual inspection. Identity verification for high-stakes transactions needs technical authentication: liveness detection, behavioral analysis, multi-factor verification that doesn't rely on "does this person look real."
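The shape of the fix looks roughly like the sketch below: no single visual signal gates approval. Every signal name and threshold here is invented for illustration.

```python
# Illustrative decision logic only: every signal name and threshold is
# invented. The design point is that no single visual check gates a
# high-stakes transaction; independent factors must corroborate each other.
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    liveness_score: float   # from a liveness-detection service (assumed)
    device_known: bool      # device previously bound to this identity
    otp_verified: bool      # out-of-band one-time code confirmed
    behavioral_score: float # interaction-pattern match (assumed)

def approve(s: VerificationSignals) -> bool:
    # Require the out-of-band factor unconditionally, then at least two
    # of the remaining independent signals.
    if not s.otp_verified:
        return False
    corroboration = sum([
        s.liveness_score >= 0.9,
        s.device_known,
        s.behavioral_score >= 0.8,
    ])
    return corroboration >= 2

print(approve(VerificationSignals(0.95, True, True, 0.7)))  # True
```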
If you operate a platform with user-generated content: India's three-hour mandate is a signal of where global regulation is heading. Build the detection and response infrastructure now, before it becomes a compliance requirement in your primary markets. Retrofitting content moderation infrastructure after a mandate takes effect is significantly more expensive than building it proactively.
If you're in legal services: Start tracking the deepfake authentication question in your jurisdiction's evidence rules. Prepare to challenge authentication methods for any video evidence in significant proceedings. And review your own client communication practices: if deepfake video can impersonate your clients, your identity verification protocols need updating.
The Speed Problem Nobody Wants to Say Out Loud
Regulation moves in years. Technology moves in months. The gap between them is where fraud, abuse, and liability accumulate.
India's three-hour mandate is an attempt to solve a speed problem with a speed solution: force platforms to act fast enough to limit harm. It's a reasonable instinct, but an operationally demanding requirement that only well-resourced platforms can meet. Which means it will be enforced unevenly: large platforms will build compliance infrastructure, small platforms will struggle, and bad actors will route around the rules through channels that don't fall under platform jurisdiction.
The research on AI-generated faces doesn't care about India's regulatory framework. The technology will keep improving regardless of what any jurisdiction mandates. The gap between what AI can generate and what humans can detect will continue to widen.
The businesses that survive this environment are not the ones that assume regulators will solve the problem. They're the ones that assume the technology will keep advancing and build their verification, documentation, and risk management accordingly.
The faces you can't tell are fake are already in circulation. Plan for more of them.