President Trump stood before reporters this week and delivered what might be the most honest assessment of artificial intelligence's real power: "If something happens that's really bad, maybe I'll have to just blame AI." The comment came after he dismissed viral White House footage as "probably AI" - footage his own press team had confirmed was real just hours earlier.
Welcome to the liar's dividend, where AI doesn't even need to create convincing fakes to wreak havoc. The mere possibility that everything could be artificial is reshaping accountability in ways that make actual deepfakes seem almost quaint by comparison.
This isn't just political theater. We're watching the emergence of what digital forensics expert Hany Farid describes as the deeper problem:
"When you enter this world where anything can be fake, then nothing has to be real. You get to deny any reality because all you have to say is, 'It's a deepfake.'"
The technology driving this crisis has evolved at breakneck speed. According to research from Quartz, generative adversarial networks, the AI systems that pit two neural networks against each other to create increasingly convincing fakes, have transformed from academic curiosity to mass-market reality. What once required Hollywood-level resources now runs on consumer laptops, producing photorealistic faces that fool humans more often than they detect them.
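To make that adversarial setup concrete, here is a minimal sketch of the idea in PyTorch: a generator learns to mimic a target distribution (a toy 1-D Gaussian standing in for images) while a discriminator learns to tell its output from real samples. The network sizes, data, and training settings are illustrative assumptions, not a description of any production deepfake system.

```python
# Minimal GAN sketch: a generator and discriminator trained against each other.
# Toy 1-D example; all sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_samples(n):
    # "Real" data: draws from N(4, 1.5), standing in for genuine images.
    return 4.0 + 1.5 * torch.randn(n, 1)

for step in range(2000):
    # 1) Train the discriminator to separate real samples from generated ones.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to produce samples the discriminator labels "real".
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, the generator's output should cluster near the target distribution.
print(generator(torch.randn(1000, 8)).mean().item())
```

The same contest, scaled up to high-dimensional image data and far larger networks, is what pushes generators toward faces convincing enough to fool both the discriminator and, increasingly, human viewers.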
But while the tech world fixates on detection algorithms and watermarking schemes, lawmakers are discovering that different types of AI fakery demand radically different legal approaches. The regulatory landscape emerging across states like California and New York reveals a fascinating split in how we think about AI-generated content.
Election deepfakes, it turns out, get the disclosure treatment. California's recent legislation requires clear labeling of AI-altered political content while still protecting speech rights - though federal judges have already started pushing back, calling such laws "a hammer instead of a scalpel" that stifles legitimate expression. New York takes a similar approach, mandating disclosure for "materially deceptive media" while allowing the content to exist.
Deepfake pornography tells a different story entirely. Here, disclosure is meaningless. No amount of labeling can undo the psychological trauma of seeing your likeness grafted onto explicit content without consent. The numbers paint a stark picture: 98% of deepfake videos online are pornographic, and 99% of those target women. Production of these videos surged 464% between 2022 and 2023 alone.
San Francisco's unprecedented lawsuit against 16 deepfake porn websites this summer signals where this battle is heading. Unlike election cases, where free speech provides a robust defense, non-consensual intimate imagery occupies legal territory closer to obscenity and harassment. The Violence Against Women Act's 2022 update created federal civil remedies specifically for this problem, recognizing that some AI applications demand prohibition rather than transparency.
The technology itself has become eerily sophisticated. As reported by researchers studying AI receipt fraud, systems like ChatGPT's 4o model now generate fake documents "so convincing that even experienced accounting professionals might miss the deception." When AI can fool experts in controlled settings, what hope do ordinary citizens have in the chaotic environment of social media?
The psychological impact runs deeper than technical capabilities. Multiple studies show that about half of Americans feel "more concerned than excited" about AI's role in daily life, while three-quarters say they trust AI-generated information only "some of the time" or "hardly ever." Yet this healthy skepticism becomes weaponized when bad actors exploit our uncertainty.
Venezuelan officials demonstrated this perfectly when they dismissed video of a U.S. strike on a drug vessel as "almost cartoonish animation" created by AI. Reuters found no evidence of manipulation, but the mere suggestion was enough to muddy the waters. That's the liar's dividend in action - doubt becomes a weapon more powerful than any deepfake.
What's particularly insidious is how this dynamic rewards the least trustworthy actors. As Boston University's Danielle Citron and the University of Texas's Robert Chesney predicted in their prescient 2019 research, "power flows to those whose opinions are most prominent" when truth becomes subjective.
The path forward requires abandoning the fantasy that we can simply build better detection tools and hope the problem solves itself. According to analysis from Deutsche Welle's media training institute, generative AI represents "the ultimate disinformation amplifier," making it possible for anyone to generate false information and fake content in vast quantities. Instead, we need legal frameworks nuanced enough to distinguish between legitimate satire requiring disclosure and harmful non-consensual content demanding prohibition. We need platforms willing to enforce their own policies rather than retreat behind corporate liability shields. Most importantly, we need leaders who understand that wielding "it's AI" as a blanket excuse corrodes democratic accountability far more efficiently than any sophisticated deepfake ever could.
The technology will keep improving. The question is whether our institutions can evolve fast enough to preserve the distinction between what's real and what's convenient to believe.