Published on MediaPost, 5 May 2023
Eight years ago, I sat in a classroom at the NASA Ames Research Center and listened to someone tell me that, thanks to advances in facial recognition, pattern matching and AI, within five years it would be impossible to lie.
It’s hard to fathom a more wildly incorrect prediction. Nonetheless, I’ve thought about it many times since then. The reason I keep coming back to it is that it spurred me to think about the opposite possibility: What does it look like if it is phenomenally simple to lie — if it’s impossible to prove you’re telling the truth?
This is what AI has done, what deepfakes have done, what democratically accessible hyperrealistic audio and video generation has done.
Almost every concern I’ve read about these tools focuses on what can be created: Fake porn. Misinformation and disinformation. IP-infringing creations.
Less often discussed is the effect these content-generation capabilities have on non-AI-generated material. As the tools improve to the point where there is no reliable way to distinguish real from fake, it becomes harder to prove that real content is real.
We’ve already seen what this looks like. In 2017, when deepfakes first emerged on the scene, Donald Trump seized on them as a way to discredit the infamous “Access Hollywood” tape. “We don’t think that was my voice,” he told a senator, saying he wanted the tape investigated.
His position gained no traction. After all, he had previously confirmed the voice on the recording was his and had even issued a rare apology. Even if he hadn’t, the technology at the time wasn’t good enough for his claim to be plausible.
But now the technology is good enough. Earlier this year, the Federal Trade Commission put out an alert warning about scammers using voice-cloning technology to imitate a relative or loved one. “All [the scammer] needs is a short audio clip of your family member’s voice — which he could get from content posted online — and a voice-cloning program. When the scammer calls you, he’ll sound just like your loved one.”
When it’s possible to fake anything, it’s possible to claim anything is fake. Incriminating audio or video can be easily dismissed: “It wasn’t me. It’s AI-generated by my enemies.” The burden of proof has now skyrocketed. We have achieved Universal Plausible Deniability.
What are we going to do about this higher burden of proof? One option is for us as individuals to start doing more research — for each of us to slow down, to double-check, to resist jumping to conclusions.
It’s never going to happen. We are hard-wired to jump to conclusions.
What is far more likely is that we’ll become inundated, befuddled, overwhelmed. We’ll believe the things that match our prior expectations and reject the rest.
And what do we do when we are inundated and overwhelmed? We look for a guide, a leader, a soothing port in the information storm.
Enter the Trust Titans: the folks to whom we are going to outsource that burden of proof. We already do this: we trust our media outlets, our favourite commentators, our influencers. Their job is to work out the real story so we don’t have to.
As the floodwaters of content rise ever higher, the role of a Trust Titan becomes ever more important — and many of them will not be up to the task. They will need to double-, triple-, and quadruple-check sources, determine who is credible and who is not, and weigh motivations and historical behaviour patterns. They will become judge and jury.

Our job, when we are choosing whom to pay attention to, is to consider this role and ask ourselves whether we think they can do it. After all, what we’re outsourcing is that most sacred of commodities: the truth.
Kaila Colbin, Certified Dare to Lead™ Facilitator
Founder and CEO, Boma