Deep fake and other threats against the truth

Tom Cruise hits a golf ball just before looking into the camera and saying, “If you like what you’re seeing, just wait till what’s coming next.” The video, which went viral on TikTok in February, is a deep fake.

A deep fake is any hyper-realistic image, video, or audio content synthesized by artificial intelligence (AI) that falsely depicts individuals saying or doing something they never did. It carries risks when not used ethically, as you can see in this article from The Defense Post.

We will walk you through some examples of good and bad practices in the use of this technology.

What happens if a deep fake is used with bad intentions?

The consequences are many, especially when the video is not intended as humor: it can lead to identity theft and the spread of false content and, in extreme cases, sow fear within democracies and distort audiences’ perception of reality. Moreover, the threat this technology poses to journalism runs counter to the profession’s ethics and commitment to transparency.

Let’s give some context. Before deep fakes, there were synthetic voices. They are used in apps that give us directions, guide us through phone calls by day, and relay the news on smart speakers at night. However, these voices are becoming increasingly human-like, which opens the door to misuse when they cross certain boundaries — for example, when they are used to replicate not only what we say but how we say it, as this Scientific American article suggests.

The term ‘deep fakes’ first became known in 2017, when a Reddit user of that name began posting digitally altered pornographic videos on the website’s forums. These were manipulated so that the faces of performers were replaced with those of other people, typically celebrities. Reddit soon became a focal point for the sharing of manipulated porn videos, prompting the tech media outlet Motherboard to publish this headline in 2017: AI-Assisted Fake Porn Is Here and We’re All F**ked.

We must understand the extent of this trend: photographs of the victims – predominantly women – are edited into sex films. Imagine that one of those faces could be yours. By the time the video is proven to be fake, millions of people will have seen it, and, unfortunately, some will still believe it’s real. That is what happened to Helen Mort, a British woman who has gone so far as to call for a change in British law to criminalize the creation and distribution of deep fakes.

Another prominent example of the risks of this technology happened in the Netherlands a few weeks ago. Dutch MPs were deceived in a video conference by a deep fake purporting to be Leonid Volkov, a close associate of Russian opposition figure Alexei Navalny. According to the deputies, suspicion arose when the parliamentary standing committee on foreign affairs recalled that Volkov was in Vilnius, the Lithuanian capital, and the conversation turned awkward.

The UK Centre for Data Ethics and Innovation produced a report on deep fakes in 2019 detailing the social threats lurking within the technology. It remains the only serious government effort to understand deep fakes, but it has been lost amid political noise and the age of hyper-information. That is what usually happens in cases like this: institutions legislate long after the crime, but the damage has already been done.

Other uses for deep fakes

Impersonation is nothing new and has even been used for engaging projects, such as the podcast in which artificial intelligence recreated Franco’s voice. We can even hear the dictator singing the Macarena.

The brewery Cruzcampo, for its TV campaign “Con mucho acento”, used a deep fake of the deceased Spanish singer Lola Flores. You can see how they did it here. According to the newspaper El País, the campaign was authorized and advised by the daughters of the renowned artist: Lola and Rosario Flores.

This branch of deep fakes, voice synthesis, involves creating a model of someone’s voice that can read text with the same intonation and cadence as the target person. Some voice synthesis products, such as Modulate.ai, allow users to choose a voice of any age and gender rather than imitating a specific target.
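Under the hood, voice-cloning systems typically condense recordings of the target speaker into a numeric “embedding” that captures traits like intonation and timbre, then condition text-to-speech on it. As a toy illustration only — the vectors and values below are invented, and real embeddings have hundreds of learned dimensions — comparing embeddings with cosine similarity shows how such a system can judge whether a synthesized voice matches its target:

```python
import numpy as np

def cosine_similarity(a, b):
    # Measure how closely two voice embeddings point in the same direction:
    # 1.0 means identical direction, values near 0 mean unrelated voices.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented 4-dimensional "voice embeddings" for illustration.
target_voice = np.array([0.9, 0.1, 0.4, 0.2])
cloned_voice = np.array([0.88, 0.12, 0.41, 0.19])  # close imitation
other_voice  = np.array([0.1, 0.8, 0.2, 0.9])      # unrelated speaker

print(cosine_similarity(target_voice, cloned_voice))  # close to 1.0
print(cosine_similarity(target_voice, other_voice))   # much lower
```

The same similarity measure cuts both ways: it lets a cloning system refine its imitation, but it is also the basis of detection tools that flag audio whose “voiceprint” matches a known speaker suspiciously well.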

As a journalist, what can I do?

Today more than ever, it is necessary to verify data, audio, video, and other information created to impersonate, misinform, and spread defamatory content.

Social networks are a nest of fake news, so as journalists we have two leading roles. The first is to educate and raise awareness about the dangers posed by malicious actors who share false information with the help of these artificial intelligence technologies. The second is to verify everything. The Society of Professional Journalists’ Code of Ethics states that a journalist should: “Take responsibility for the accuracy of their work; Verify all information before distributing it; Use original sources whenever and wherever possible.”

If you are interested in artificial intelligence and its good uses, take a look at this list of the top 10 artificial intelligence initiatives from Compromiso Empresarial magazine. We also invite you to check out Datasketch’s initiatives to verify and visualize data, and to learn about the laws that regulate data here.

 

Original post: https://www.datasketch.co/blog/data-journalism/deep-fake-and-other-threats-against-the-truth/