As the use and power of artificial intelligence increases, the risk for AI to become the hacker, rather than the victim also grows, according to a new report.
The problem is two-fold, wrote Bruce Schneier, a fellow at the Berkman Center for Internet and Society at Harvard University, in “The Coming AI Hackers,” a recent report from Harvard’s Belfer Center for Science and International Affairs. “One, AI systems will be used to hack us. And two, AI systems will themselves become hackers: finding vulnerabilities in all sorts of social, economic, and political systems, and then exploiting them at an unprecedented speed, scale, and scope,” Schneier said. “It’s not just a difference in degree; it’s a difference in kind. We risk a future of AI systems hacking other AI systems, with humans being little more than collateral damage.”
He pointed to several public-sector examples where this threat is already becoming apparent. In one, researchers used a text-generation program to submit 1,000 comments to a request for input on a Medicaid issue, effectively fooling Medicaid.gov workers who accepted them as real concerns from human beings. (The researchers identified the comments for removal so as not to affect the policy debate.)
They’re similar to “persona bots,” or AI posing as people on social media and in other digital groups. They interact normally but occasionally make a politically charged post. “Persona bots will break the ‘notice-and-comment’ rulemaking process by flooding government agencies with fake comments,” Schneier said, potentially affecting public opinion. “It’s not that being persuaded by an AI is fundamentally more damaging than being persuaded by another human, it’s that AIs will be able to do it at computer speed and scale.”
Governments tend to use AI to make their processes more efficient. For instance, in the United Kingdom, a Stanford University student built a bot to automatically determine eligibility and fill out applications for services such as government housing. AI is also used to inform military targeting decisions, the report added.
“As AI systems get more capable, society will cede more — and more important — decisions to them,” Schneier said. “They already influence social outcomes; in the future they might explicitly decide them. Hacks of these systems will become more damaging.”
Sometimes hacks occur because of backdoors purposely built into technology — something that companies Huawei and ZTE have been suspected of doing at the behest of the Chinese government. Other times, however, they happen accidentally.
Someone who wanted a robot vacuum to stop bumping into furniture retrained it by rewarding it for not hitting bumper sensors, but the AI just had the vacuum drive backward because there were no sensors on the back of the vacuum. That may seem harmless, but Schneier said it points to a greater problem.
“Any good AI system will naturally find hacks,” he said. “If there are problems, inconsistencies, or loopholes in the rules, and if those properties lead to an acceptable solution as defined by the rules, then AIs will find them.”
To that end, he predicted that in less than a decade, AI will routinely beat humans at hacker “capture the flag” events. That’s because human capabilities will largely remain static while technologies will constantly improve. “It will be years before we have entirely autonomous AI cyberattack capabilities, but AI technologies are already transforming the nature of cyberattack,” Schneier said.
The best defense against AI-turned-hacker is people, he concluded. “What I’ve been describing is the interplay between human and computer systems, and the risks inherent when the computers start doing the part of humans,” Schneier said. “And while it’s easy to let technology lead us into the future, we’re much better off if we as a society decide what technology’s role in our future should be.”
Original post: https://gcn.com/articles/2021/04/26/ai-hackers.aspx