Misinformation, the proliferation of fake news and misleading content across social media platforms, is one of today's growing problems. While artificial intelligence (AI) helps spread it, there is mounting evidence that AI can also be used to curb it.
However, misinformation reaches beyond the daily news article, with far-reaching and often alarming implications in more critical fields such as cybersecurity, public safety, medicine, and even science. In fact, published collaborative papers, including one in the April 2021 issue of PNAS, have tackled misinformation arising from common human biases and from prevailing practices in the critique and release of scientific papers, even in respected, peer-reviewed journals.
Now, a new study involving researchers from the University of Maryland, Baltimore County, examines an emerging source of misinformation within the scientific community. The researchers report that AI systems can generate misinformation convincing enough to fool even experts in fields like medicine and defense.
The study, “Generating Fake Cyber Threat Intelligence Using Transformer-Based Models,” is available on the preprint server arXiv.
Using AI as “Transformers” of Misinformation
To test this new form of fake news, the researchers used AI models known as transformers to generate false news on cybersecurity and COVID-19, topics selected for their relevance to the defense and health industries. When the fake news generated by these AI systems was presented to experts for evaluation, the researchers found that this “transformer-generated misinformation” could indeed fool them.
In the same manner that the researchers used AI to generate fake news, many of the existing technologies for fighting it also rely on artificial intelligence. AI systems allow computer scientists to sift through large volumes of information, fact-checking it quickly without needing to read each item manually. Without such systems, the sheer amount of information would be overwhelming for a human to process.
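The article does not describe any specific fact-checking system, but the basic idea of automatically comparing an incoming claim against a database of verified statements can be sketched in a few lines. This is a toy illustration only; the corpus and similarity measure here are invented for the example, and real systems use far larger databases and trained language models rather than simple string matching:

```python
from difflib import SequenceMatcher

# Toy corpus of verified statements; a real fact-checker would query
# large fact-checked databases, not a hard-coded list.
VERIFIED = [
    "the vulnerability affects versions 1.0 through 1.4",
    "the patch was released in march 2021",
]

def best_match(claim, corpus):
    """Return the most similar verified statement and its similarity score."""
    scored = [(s, SequenceMatcher(None, claim.lower(), s.lower()).ratio())
              for s in corpus]
    return max(scored, key=lambda pair: pair[1])

claim = "The vulnerability affects versions 1.0 through 1.4"
statement, score = best_match(claim, VERIFIED)
print(round(score, 2))  # prints 1.0 for this exact (case-insensitive) match
```

A low best-match score would flag the claim for human review, which is how automated triage keeps the volume of material a person must read manageable.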
The transformers used in the study are similar to Google’s BERT, an open-source neural network-based method for natural language processing (NLP) pre-training, and OpenAI’s GPT.
Natural language processing or NLP is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language.
Generative Pre-trained Transformer (GPT), created by OpenAI, a San Francisco artificial intelligence research laboratory, uses deep learning to produce human-like text.
Both AI systems use natural language processing to speed up and improve the processing of vast amounts of written data. NLP even allows systems to generate summaries, translations, and interpretations of these articles.
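Modern summarization is done with large transformer models like BERT and GPT, but one of the oldest NLP ideas behind it, extractive summarization, can be sketched with only the standard library. The word-frequency scoring rule below is a deliberate simplification for illustration, not how transformer models actually work:

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Score each sentence by how frequent its words are across the whole
    text, then keep the top-scoring sentences in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: sum(freq[w] for w in re.findall(r"\w+", sentences[i].lower())),
        reverse=True,
    )
    keep = sorted(ranked[:n_sentences])
    return " ".join(sentences[i] for i in keep)

article = "AI systems process text. AI systems also summarize text quickly. Cats sleep."
print(extractive_summary(article))  # keeps the highest-scoring sentence
```

Transformer-based summarizers go further by generating new sentences rather than extracting existing ones, which is also what makes them capable of producing convincing fake text.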
The Information Age Arms Race
In their study, the University of Maryland, Baltimore County, researchers managed to fool cybersecurity experts, people expected to be knowledgeable about cybersecurity attacks and vulnerabilities.
Researchers also warned that a similar transformer AI model could generate misleading medical articles and fool medical experts.
Writing in The Conversation, Ph.D. student and study co-author Priyanka Ranade warns that this could lead to an “arms race,” as the people behind misinformation campaigns develop ever-better ways to outsmart the existing AI defenses against them.
Check out more news and information on Fake News in Science Times.