Published on MediaPost, 31 July 2020.
The purpose of this week’s House hearing on the tech industry was not to discuss misinformation and disinformation. The chief executives of Google, Apple, Amazon and Facebook were there to answer to the Judiciary Committee’s antitrust subcommittee.
But it’s impossible to engage in a robust discussion with or about those moguls, those barons, those tycoons, without the topic of misinformation and disinformation making an appearance.
Indeed, Facebook’s Mark Zuckerberg went there in his opening statement. “We recognize that we have a responsibility to stop bad actors from interfering with or undermining these conversations through misinformation, attempted voter suppression, or speech that is hateful or incites violence,” he said.
“I understand the concerns people have in these areas, and we are working to address them. While we are making progress — for example, we have dramatically improved our ability to proactively find and remove harmful content and prevent election interference — I recognize that we have more to do.”
Forgive me if I’m skeptical. Mark Zuckerberg has been asking for forgiveness rather than permission since long before you and I had ever heard of him — and the strategy has served him exceptionally well.
As Facebook’s market cap has grown to more than $660 billion, the fines and punishments levied against it seem increasingly pointless. Last July, the Federal Trade Commission hit the company with a $5 billion fine — and the stock went up.
So, yeah. Not much incentive for the company to take any meaningful action toward preventing misinformation and disinformation on its platforms.
Which is alarming, because the purveyors of falsehoods are getting better at their jobs — and getting more powerful tools with which to do them.
I first wrote about deepfakes in November of 2018: “It’s a cat-and-mouse game that will only get worse. And the advantage is fully with the fakes. You don’t need a perfect fake video to spread a rumor, sow distrust, feed people’s fears and biases, or undermine attempts at common ground. An OK fake video will do nicely.”
At that time, OK fake videos were all we had. Today, deepfakes have become terrifyingly good. Case in point: last November, MIT’s Center for Advanced Virtuality created a video of Richard Nixon delivering the contingency speech prepared in case the Apollo 11 moon landing had failed. Watch it — and then ask yourself whether you could confidently tell, on first viewing, whether a video is real.
My friend Ben Reid — the author of the excellent Memia newsletter — thinks we have nothing to worry about. He believes AI will shortly be able to verify whether videos are real and flag fake ones on the spot.
I’m not so optimistic. Fact-checks, links to further information, and verification check marks all seem powerless against a compelling bit of copy or imagery that reinforces something you’re already inclined to believe.
Either way, we shouldn’t rely on AI. After all, misinformation and disinformation have been spreading without the need for deepfakes. The video shared this week promoting hydroxychloroquine was a genuine video of people saying not-genuine things. It was seen 20 million times before it was taken down.
We need to rely on our own discernment. We need to do the work to determine for ourselves how much we can trust what we see. We need to check references, look at the source, and weigh what we’re seeing against the other evidence for and against it.
It means we have to get more skilled at assessing and analyzing information. It means we have to become more critical consumers of news. It means we have to consistently be prepared to be wrong.
If we want to trust what we read, hear and see, the responsibility is ours.
Ngā mihi mahana (warm regards),
Kaila
Kaila Colbin, Certified Dare to Lead™ Facilitator
Co-founder, Boma Global // CEO, Boma NZ