In a post-truth world, synthetic media and photorealism are a boon to the ‘fake news’ industry

Some snapshots may look like people you’d know. Your daughter’s best friend from college, maybe? That guy from human resources at work? The emergency-room doctor who took care of your sprained ankle? One of the kids from down the street?

Nope. All of these images are “deepfakes” – the nickname for computer-generated, photorealistic media created via cutting-edge artificial intelligence technology. They are just one example of what this fast-evolving method can do. (You can create synthetic images yourself at ThisPersonDoesNotExist.com.) Hobbyists, for example, have used the same AI techniques to populate YouTube with a host of startlingly lifelike video spoofs – the kind that show real people such as Barack Obama or Vladimir Putin doing or saying goofy things they never did or said, or that revise famous movie scenes to give actors like Amy Adams or Sharon Stone the face of Nicolas Cage.

All the hobbyists need is a PC with a high-end graphics chip and maybe 48 hours of processing time.

It’s good fun, not to mention jaw-droppingly impressive. And coming down the line are some equally remarkable applications that could make quick work of once-painstaking tasks: filling in gaps and scratches in damaged images or video; turning satellite photos into maps; creating realistic streetscape videos to train autonomous vehicles; giving a natural-sounding voice to those who have lost their own; turning Hollywood actors into their older or younger selves; and much more.

Deepfake artificial-intelligence methods can map the face of, say, actor Nicolas Cage onto anyone else – in this case, actor Amy Adams in the film Man of Steel.

Yet this technology has an obvious – and potentially enormous – dark side. Witness the many denunciations of deepfakes as a menace, Facebook’s decision in January to ban (some) deepfakes outright and Twitter’s announcement a month later that it would follow suit.

“Deepfakes play to our weaknesses,” explains Jennifer Kavanagh, a political scientist at the RAND Corporation and co-author of Truth Decay, a 2018 RAND report about the diminishing role of facts and data in public discourse. When we see a doctored video that looks utterly real, she says, “it’s really hard for our brains to disentangle whether that’s true or false.”

And the internet being what it is, there are any number of online scammers, partisan zealots, state-sponsored hackers and other bad actors eager to take advantage of that fact.

“The threat here is not, ‘Oh, we have fake content!’” says Hany Farid, a computer scientist at the University of California, Berkeley, and author of an overview of image forensics in the 2019 Annual Review of Vision Science.

Media manipulation, after all, has been around forever. “The threat is the democratisation of Hollywood-style technology that can create really compelling fake content.”

It’s photorealism that requires no skill or effort, he says, coupled with a social-media ecosystem that can spread that content around the world with a mouse click.

Digital image forensics expert Hany Farid of UC Berkeley discusses how artificial intelligence can create fake media, how it proliferates and what people can do to guard against it.

The technology gets its nickname from Deepfakes, an anonymous Reddit user who launched the movement in November 2017 by posting AI-generated videos in which the faces of celebrities such as Scarlett Johansson and Gal Gadot were mapped onto the bodies of porn stars in action.

This kind of non-consensual celebrity pornography still accounts for about 95 per cent of all the deepfakes out there, with most of the rest being jokes of the Nicolas Cage variety.

But while the current targets are at least somewhat protected by fame – “People assume it’s not actually me in a porno, however demeaning it is,” Johansson said in a 2018 interview – abuse-survivor advocate Adam Dodge figures that non-celebrities will increasingly be targeted, as well. Old-fashioned revenge porn is a ubiquitous feature of domestic violence cases as it is, says Dodge, who works with victims of such abuse as the legal director for Laura’s House, a non-profit agency in Orange County, California.

And now with deepfakes, he says, “unsophisticated perpetrators no longer require nudes or a sex tape to threaten a victim. They can simply manufacture them.”

Then there’s the potential for political abuse. Want to discredit an enemy? Indian journalist Rana Ayyub knows how that works: In April 2018, her face was inserted into a deepfake porn video that went viral across the subcontinent, apparently because she is an outspoken Muslim woman whose investigations had offended India’s ruling party.

Or how about subverting democracy? We got a taste of that in the fake-news and disinformation campaigns of 2016, says Farid. And there could be more to come. Imagine it’s election eve in 2020 or 2024, and someone posts a convincing deepfake video of a presidential candidate doing or saying something vile. In the hours or days it would take to expose the fakery, Farid says, millions of voters might go to the polls thinking the video is real – thereby undermining the outcome and legitimacy of the election.

Meanwhile, don’t forget old-fashioned greed. With today’s deepfake technology, Farid says, “I could create a fake video of Jeff Bezos saying, ‘I’m quitting Amazon,’ or ‘Amazon’s profits are down 10 per cent.’” And if that video went viral for even a few minutes, he says, markets could be thrown into turmoil. “You could have global stock manipulation to the tune of billions of dollars.”

And beyond all that, Farid says, looms the “terrifying landscape” of the post-truth world, when deepfakes have become ubiquitous, seeing is no longer believing and miscreants can bask in a whole new kind of plausible deniability. Body-cam footage? CCTV tapes? Photographic evidence of human-rights atrocities? Audio of a presidential candidate boasting he can grab women anywhere he wants? “Deepfake!”

Deepfake video methods can digitally alter a person’s lip movements to match words that they never said. As part of an effort to grow awareness about such technologies through art, the MIT Center for Advanced Virtuality created a fake video showing President Richard Nixon giving a speech about astronauts being stranded on the moon.

Thus the widespread concern about deepfake technology, which has triggered an urgent search for answers among journalists, police investigators, insurance companies, human-rights activists, intelligence analysts and just about anyone else who relies on audiovisual evidence.

  • A Knowable Magazine report