Information manipulation has been around at least since the Chinese general Sun Tzu wrote “The Art of War” in the fifth century B.C. Russia routinely uses disinformation tactics to destabilize democracies. Events like the 2020 U.S. elections and the COVID-19 vaccination campaign highlight how political opponents and rogue nations actively run disinformation campaigns to undermine confidence in governments and science, sowing fear and distrust. The disinformation machine is estimated to cost the global economy $78 billion a year.
The good news is that we’re getting better at detecting deep fakes. In a recent webinar, Raymond Lee, CEO of FakeNet.ai, spoke about ways to spot synthetic videos and how to avoid falling victim to deep fakes.
From Disinformation to Deep Fakes
Disinformation tactics are continuously evolving, owing to advances in artificial intelligence (AI) and machine learning (ML) that are giving rise to newer, more convincing methods of information manipulation. We’ve all heard the age-old saying, “Seeing is believing.” Criminal and politically motivated organizations now create videos and audio recordings (a.k.a. deep fakes) that look and sound like the real deal. You might recall the movie Forrest Gump, in which Tom Hanks meets President Kennedy; producing digital effects like these no longer costs a fortune or requires specialized technical expertise. In 2021, free mobile apps and user-friendly software can doctor videos and create false narratives without a single line of code.
The flames of what the FBI calls “synthetic content” (defined as fake audio, video, text or images) are being fanned by social media and spreading like the proverbial wildfire. Election-related manipulated videos alone have grown 20x since 2019. Soon it will be extremely hard for ordinary individuals to distinguish truth and reality from falsehood and fiction. The FBI confirms that synthetic content is being exploited by cybercriminals and nation-states to advance their malicious goals and “foreign influence operations.”
Cheapfakes Versus Deep Fakes
Synthetic videos began as “cheapfakes,” an entertainment tool for amateur hobbyists who grafted ordinary faces onto movie stars and made public figures say unlikely things. But bad actors also leverage this low-tech method to quickly modify or splice together two unrelated videos. For example, a viral video that zoomed in on Joe Biden’s white shirt during his first presidential debate falsely claimed he was wearing a wire and being fed instructions.
As with most things digital, the underlying AI and ML techniques are maturing, giving rise to videos that are near-perfect and virtually undetectable as fake by security filters. Generative adversarial networks (GANs) pit two ML models against each other: one creates forgeries while the other tries to detect them, and each round of competition makes the forgeries more flawless and realistic. It’s not hard to imagine a well-crafted deep fake evading detection technology and then going on to crash the stock market, cause large-scale civil unrest or swing an election. The FBI warns that deep fakes can lead to business identity compromise (BIC) attacks that can result in significant financial and reputational damage.
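The adversarial tug-of-war behind a GAN can be illustrated with a deliberately tiny, hypothetical sketch (not production deep fake code). Here the “generator” is a single learned offset that shifts random noise toward a real data distribution, and the “discriminator” is a one-variable logistic classifier trying to tell real samples from fakes; all names and parameters below are illustrative assumptions:

```python
import math
import random

random.seed(42)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def real_sample() -> float:
    # "Real" data: noisy samples centered at 4.0
    return random.gauss(4.0, 0.5)

# Generator parameter: an offset added to noise (starts far from the real mean)
theta = 0.0
# Discriminator parameters: D(x) = sigmoid(w*x + b), the probability x is real
w, b = 0.1, 0.0
lr = 0.05  # learning rate for both players

for step in range(3000):
    x_real = real_sample()
    x_fake = theta + random.gauss(0.0, 0.5)  # generator's forgery

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: nudge theta so the discriminator scores fakes as real
    d_fake = sigmoid(w * x_fake + b)
    theta += lr * (1 - d_fake) * w

# After many rounds, the generator's offset has drifted toward the real mean
print(f"generator offset: {theta:.2f}")
```

The same dynamic, scaled up from one scalar to deep networks generating pixels and audio waveforms, is what makes GAN-produced fakes progressively harder for filters to catch: the generator is trained directly against a detector.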
Manipulation Tactics Used by Deep Fake Authors
Humans are the root cause of 85% of security breaches; attackers use social engineering and phishing to manipulate users into performing an action such as clicking a link, opening a file attachment or entering their credentials. In the context of a deep fake, the means of exploitation are the same: misinform users, exploit their emotions and lead them toward a certain action. To coerce users into taking these actions, deep fake creators use information manipulation methods such as:
- Missing Context: These types of deep fake videos are usually unaltered or unedited; however, the context in which they are presented is often misleading. For example, a black-and-white video that allegedly predicted the COVID-19 pandemic back in the 1950s went viral on social media. The video was actually made in 2020.
- Deceptive Editing: This technique involves omission: a large portion of a video is removed and the remainder is presented as a complete narrative. It can also involve splicing disparate videos together to alter or distort the narrative.
- Malicious Transformation: This method involves doctoring or altering frames of a video and dubbing audio to deceive and manipulate the viewer, typically using sophisticated AI tools to create high-quality fake images and videos. Viral deep fake videos in which an impersonator was made to look and sound exactly like the actor Tom Cruise prove the scary potential of this technology.
Best Practices to Identify and Mitigate Synthetic Content
Detecting deep fakes is difficult, but detection technology is maturing with time, and the situation is not all doom and gloom. U.S. military research agency DARPA has announced its commitment to detecting and filtering out synthetic content, using the same AI tools to identify suspicious and false media streams. Internet behemoths like Facebook, Microsoft, Google and Twitter are advancing their efforts to detect and block deep fakes. As consumers and businesses, we can help, too. The FBI recommends following these best practices:
- Practice the SIFT method while consuming online content: stop, investigate the source, find trusted coverage, trace the original content.
- Be extremely wary of malicious influence, especially on topics that are divisive or inflammatory.
- When in doubt, validate or corroborate facts through multiple, independent sources of information.
- Never assume someone to be real based on their online persona (photographs, audio, images).
- Use multifactor authentication (MFA) where possible, especially on shared business accounts.
- Foster a resilient ecosystem by training and encouraging users to flag suspicious activity or behavior, such as your CEO suddenly calling and instructing you to make a large wire transfer.
University College London has called deep fakes the most serious AI crime threat. Like phishing, deep fakes will invariably permeate our everyday lives. Businesses, media and public institutions must recognize this growing, insidious menace, do their part to create awareness and work toward establishing a common framework that mandates transparency and authenticity.