Deepfakes Are Amazing. They’re Also Terrifying for Our Future.

Imagine this: You click on a news clip and see the President of the United States at a press conference with a foreign leader. The dialogue sounds real. The news conference looks real. You share it with a friend. They share it with a friend. Soon, everyone has seen it. Only later do you learn that the president’s head was superimposed on someone else’s body, and that none of it ever actually happened.

Sound far-fetched? Not if you’ve seen a certain wild video from YouTube user Ctrl Shift Face. Since last August, it’s gotten more than 10 million views.

In it, comedian Bill Hader shares a story about his encounters with Tom Cruise and Seth Rogen. As Hader, a skilled impressionist, does his best Cruise and Rogen, those actors’ faces seamlessly, frighteningly melt into his own. The technology makes Hader’s impressions that much more vivid, but it also illustrates how easy—and potentially dangerous—it is to manipulate video content.

What is a deepfake?

The Hader video is an expertly crafted deepfake, built on a technology invented in 2014 by Ian Goodfellow, then a Ph.D. student and now a researcher at Apple: the generative adversarial network (GAN). Most deepfake systems rely on GANs.

GANs enable algorithms to move beyond classifying data to generating or creating images. A GAN pits two neural networks against each other: a generator that creates images and a discriminator that tries to tell the generator’s fakes from real images. As the two compete, the generator’s output becomes increasingly convincing. Using as little as one image of a person, a well-trained system can create a video clip of that person. Samsung’s AI Center recently released research sharing the science behind this approach.

“Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters,” said the researchers behind the paper. “We show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings.”
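To make the adversarial idea concrete, here’s a minimal sketch of a GAN training loop in PyTorch. The toy network sizes, learning rates, and the random stand-in for “real” images are all illustrative assumptions; actual deepfake systems use far larger convolutional models, but the generator-versus-discriminator competition works the same way.

import torch
import torch.nn as nn

latent_dim, image_dim = 16, 64  # toy sizes chosen for illustration

# Generator: turns random noise into a fake "image" (a flat vector here).
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, image_dim), nn.Tanh())

# Discriminator: scores how "real" an input looks.
D = nn.Sequential(nn.Linear(image_dim, 128), nn.ReLU(),
                  nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, image_dim)   # stand-in for a batch of real images
    fake = G(torch.randn(32, latent_dim))

    # Train the discriminator to label real images 1 and fakes 0.
    d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to make the discriminator output 1 on its fakes.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

Each network’s improvement forces the other to improve, which is why the generator’s fakes become harder and harder to spot. That same dynamic, scaled up, is what produces a convincing talking head.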

For now, the technique applies only to talking-head videos. But with 47 percent of Americans getting their news through online video, what happens when GANs can make people appear to dance, clap their hands, or do anything else their creators choose?

Why are deepfakes dangerous?

Set aside the fact that more than 30 nations are actively engaged in cyberwar at any given time, and the biggest concern with deepfakes might be sites like the ill-conceived DeepNude, where the faces of celebrities and ordinary women could be superimposed on pornographic video content.

DeepNude’s founder eventually canceled the site’s launch, fearing that “the probability that people will misuse it is too high.” What else, exactly, did he think people would do with fake pornography?

“At the most basic level, deepfakes are lies disguised to look like truth,” says Andrea Hickerson, Director of the School of Journalism and Mass Communications at the University of South Carolina. “If we take them as truth or evidence, we can easily make false conclusions with potentially disastrous consequences.”

A lot of the fear about deepfakes rightfully concerns politics, Hickerson says. “What happens if a deepfake video portrays a political leader inciting violence or panic? Might other countries be forced to act if the threat was immediate?”

With the 2020 elections approaching and the continued threat of cyberattacks and cyberwar, we have to seriously consider a few scary scenarios:

→ Weaponized deepfakes will be used in the 2020 election cycle to further ostracize, isolate, and divide the American electorate.

→ Weaponized deepfakes will be used to influence not only the voting behavior, but also the consumer preferences, of hundreds of millions of Americans.

→ Weaponized deepfakes will be used in spear phishing and other known cybersecurity attack strategies to target victims more effectively.

This means that deepfakes put companies, individuals, and the government at increased risk.

“The problem isn’t the GAN technology, necessarily,” says Ben Lamm, CEO of the AI company Hypergiant Industries. “The problem is that bad actors currently have an outsized advantage and there are no solutions in place to address the growing threat. However, there are a number of solutions and new ideas emerging in the AI community to combat this threat. Still, the solution must be humans first.”

A new peril: deepfake financial scams

Do you remember your first robocall? Perhaps not: a few years ago, automated calls were convincing enough that most of us didn’t yet realize what we were hearing. Luckily, those scam calls have been on the decline. The U.S. Federal Trade Commission reports that robocall complaints fell 68 percent in April and 60 percent in May, compared to the same months in 2019.

However, audio deepfake technology could easily give the tactic new life. According to Nisos, a cybersecurity company based in Alexandria, Virginia, hackers are using machine learning to clone people’s voices. In one documented case, hackers used synthetic deepfake audio in an attempt to defraud a tech company.

Nisos shared that audio clip with Motherboard.

The attack came in the form of a voicemail message that appeared to be from the tech company’s CEO, asking an employee to call back to “finalize an urgent business deal.”

“The recipient immediately thought it suspicious and did not contact the number, instead referring it to their legal department, and as a result the attack was not successful,” Nisos notes in a July 23 white paper.