Deepfakes: Artificial intelligence causes stir on social media, raises international security concerns

The SIUE School of Business and the Canadian telephone company TELUS Communications recently co-sponsored and hosted the International Telecommunications Society webinar “Deepfakes: The Coming Infocalypse with Nina Schick,” addressing synthetic media and its broad implications.

Nina Schick, an author, speaker and adviser who specializes in how technology is transforming geopolitics and society, said a deepfake is a piece of synthetic or fake media that has been manipulated or entirely generated by artificial intelligence.

“It can be an image, it could be a video, it can even be a piece of audio — and really, this amazing ability of artificial intelligence to create fake media is a cutting-edge development which [has] only been possible for the last five years or so, thanks to the recent revolution in [deep] learning which is moving AI from the realm of science fiction into practical reality,” Schick said.

A recent example is the deepfake of Tom Cruise working on his golf swing, created by Belgian visual effects specialist Chris Ume, shared on social media and reported on by ABC News. While something as simple as pretending a celebrity is playing golf doesn’t seem harmful, it could be when you consider what this technology could do to reputations and national security, Schick said.

“The potential for harm is tremendous,” Rep. Adam Schiff said in a 2019 interview with Morgan Radford on TODAY. “And what psychologists will tell you is if you see a video of someone saying something distasteful or racist or criminal or whatever, even if you’re later persuaded that wasn’t them, you can never completely lose the lingering negative impression of that person.”

Stanford Levin, emeritus professor of economics and finance and consultant for TELUS Communications, said that deepfake technology will likely have a big impact on society.

“This is the same sort of thing that Facebook and other social media are struggling with. ‘How do you know that something that was put online is correct? Or you’re getting data or you’re watching something … is it artificial? Or is it real?’” Levin said. “So, we’re going to have to come to terms with how to verify things.”

Schick said voice-cloning technology, which once required large amounts of training data and a long time to synthesize a voice, is becoming more automated and can now produce a clone in seconds, making it accessible to more than just AI researchers.

“Already, five seconds of an audio recording of someone’s voice is enough for AI to learn to clone it. So hypothetically, that means if there is a five second recording of you somewhere like on your camera roll, on an Instagram story, on a LinkedIn post, on a YouTube video, on a WhatsApp voice recording, that can potentially be used to hijack your identity,” Schick said. “The AI’s ability to clone real people is of course not only limited to audio, amazingly it extends to fake video too, even if that person is already dead.”

Schick used deepfake videos of the long-dead artist Salvador Dalí, created by the Dalí Museum in Florida, and a synthetic voice clone of comedian and podcaster Joe Rogan to explain humans’ processing fluency.

“If I primed you and told you that the Rogan voice and the Dalí video were fake, could you have believed that they were real? The answer is that many would have, because as humans we’re wired to want to believe something that looks and sounds right,” Schick said. “It’s actually a cognitive bias known as processing fluency. So, it should come as no surprise then that media manipulation has a very long and prolific history as a very powerful tool to shape the collective human perception.”

Corporations are already using AI to develop training videos and business platforms, Schick said, pointing to a synthetic future that also has some positive applications.

“It also offers an opportunity for real social good. Some companies, for instance like VocaliD … are developing synthetic voices … for those who cannot speak due to debilitating diseases like cancer or stroke, or Parkinson’s,” Schick said. “So, I think you can begin to understand just how profound synthetic media will be for the future of, not only human communication, but actually even commerce.”

Schick said there is no doubt in her mind that we are at the start of an AI paradigm change when it comes to the future of content production, communication and human perception, and that the future is synthetic.

“This synthetic future is exciting … in terms of how it will democratize content creation, … what it might mean for the future of creative industries like advertising, entertainment — but it holds risks, too,” Schick said. “Synthetic media is going to be an amplifier of human intention, and there is no doubt that this powerful technology will be weaponized by malicious actors, it can, it will and already is being weaponized.”

Deepfake pornography emerged in 2017, and the number of deepfake porn creations doubles every six months, Schick said. It is troubling both for its nonconsensual nature and as a gendered phenomenon: women are usually the targets, and increasingly the majority of deepfake porn is aimed at ordinary women, not just female celebrities.

“These are your wives, your colleagues, your friends, and alarmingly, even your daughters. Because now deepfake porn of minors has really started to become a phenomenon as well. All that’s needed to create a deepfake porn is some training data, which in this case would be authentic media of a target which could pretty easily be scraped off social media,” Schick said. “I believe that this is simply a harbinger of things to come. The use of AI in fake porn heralds to me a much larger data privacy and civil liberties issue.”

According to the Cyber Civil Rights Initiative, 48 states, Washington, D.C. and Guam have passed anti-revenge porn laws, which protect people from being cyberbullied by having sexually explicit photos or videos of them distributed online without their consent.

In October 2019, the Senate passed a stand-alone act, the Deepfake Report Act of 2019, which requires the Department of Homeland Security to report each year on the uses of deepfake technology. These reports are to cover how the technology is evolving, its potential to harm national security and civil rights, detection methods and more.

Schick said synthetic media is already corroding trust in all authentic media, because if video, which we tend to see as an extension of our own perception, can be faked, then everything can be denied.

“There is a profound societal and political risk as well, especially in the context of an information ecosystem in which we already face an epidemic of mis[information] and disinformation that has become abundantly clear to all of us in the context of the pandemic of the past 12 months or so,” Schick said.

Suman Mishra, associate professor and director of graduate studies in the Department of Mass Communications, said while there are a lot of negative aspects to this kind of technology, it will also open up new creative areas in mass communications fields such as film, television and advertising.

“With any technology, I think if you see the history of things, there [is] a lot of good that is done. Ultimately it becomes about intention … if you are using [it] as something to really do something very creative and positive in the world, excellent. This is just another level of sophistication,” Mishra said. “There are a whole lot of people [who will use it for] power or money and they will exploit this technology. Porn and politics [are] always at the forefront of exploiting any technology.”

School of Business Dean Tim Schoenecker said the business school agreed to be the academic sponsor for the deepfake webinar once they heard the topic, after being contacted by Levin, who also serves as an ITS board member.

“While it certainly has an effect on business, I think it’s much, much broader than that, and I don’t think it just affects businesses, but it affects citizenship and people’s daily lives,” Schoenecker said. “I would think it could be equally interesting to mass comm students, or pre-law students, or political science students or computer science students. The topic has very broad applicability.”

According to an article by Kristina Libby in Popular Mechanics, deepfake technology was created in 2014 by Dr. Ian Goodfellow, then a Ph.D. student, who now works for Apple. The technology is based on GANs, or generative adversarial networks. More about how GANs are used to create deepfakes can be found at
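The adversarial idea behind GANs can be illustrated without any image data: a generator and a discriminator are trained against each other, the generator learning to produce samples the discriminator can no longer tell apart from real ones. The sketch below is a minimal, illustrative toy in plain Python, fitting a one-dimensional Gaussian rather than faces; all variable names and hyperparameters are invented for this example, not drawn from the article or from Goodfellow's work.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    # Clamped logistic function to avoid overflow in this toy sketch.
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, u))))

# Toy "real data": scalar samples from a Gaussian centred at 4.0.
REAL_MEAN, REAL_STD = 4.0, 1.25

# Generator G(z) = a*z + b maps noise z ~ N(0, 1) to a fake sample;
# it starts out producing samples centred far from the real data.
a, b = 1.0, 0.0

# Discriminator D(x) = sigmoid(w*x + c) scores how "real" x looks.
w, c = 0.0, 0.0

lr = 0.002
for _ in range(20000):
    z = random.gauss(0.0, 1.0)
    x_real = random.gauss(REAL_MEAN, REAL_STD)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    w += lr * ((1.0 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1.0 - d_real) - d_fake)
    # Generator: gradient ascent on log D(fake) (non-saturating loss),
    # nudging fake samples toward regions the discriminator calls real.
    a += lr * (1.0 - d_fake) * w * z
    b += lr * (1.0 - d_fake) * w

# After training, the generator's output mean has drifted toward REAL_MEAN.
fake_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(5000)) / 5000
print(round(fake_mean, 2))
```

Deepfake systems apply this same adversarial recipe at much larger scale, with deep convolutional networks in place of the two-parameter generator and discriminator here, and images or video frames in place of scalar samples.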

To watch the full webinar with Nina Schick, visit the ITS website,

