Scientists are creating AI that can detect “anger or fear” in a public area

Korean scientists are creating an AI called 5G-I-VEmoSYS, which can read human emotions via wireless signals and body movement

The dawn of 5G communication technology raises questions about the near future. The smart era can be seen in cities, transport systems, our personal devices and how we're tracking COVID-19 across the world. Robots can even help deliver therapy to humans, one of the most delicate, emotion-based exchanges there is.

If not for the internet and AI, vaccine candidates would have taken months longer to identify. AI that examines data and narrows down outcomes has empowered policymakers to understand COVID-19's impact before it unfolds.

When it comes to our personal devices, phones are where this intelligence is sharpest. They adjust lighting, adapt to our changing habits, listen out for key phrases and supply algorithm-chosen options for our entertainment. A whole world of sci-fi imagines what could happen when humanity gives tech the power to read and interpret nuanced, live, human emotion.

Basically, what could happen when AI develops a deep-rooted emotional intelligence?

The AI can recognise at least five kinds of emotion

Professor Hyunbum Kim from Incheon National University, Korea, has developed 5G-I-VEmoSYS. This system can recognise at least five kinds of emotion (joy, pleasure, a neutral state, sadness, and anger) and is composed of three subsystems dealing with the detection, flow, and mapping of human emotions.

They’re calling the detection software Artificial Intelligence-Virtual Emotion Barrier, or AI-VEmoBAR. This uses the reflection of wireless signals from a human subject to detect emotions. This emotion information is then handled by the system concerned with flow, called Artificial Intelligence-Virtual Emotion Flow, or AI-VEmoFLOW, which enables the flow of specific emotion information at a specific time to a specific area.

Finally, the Artificial Intelligence-Virtual Emotion Map, or AI-VEmoMAP, utilises a large amount of this virtual emotion data to create a virtual emotion map that can be utilised for threat detection and crime prevention.
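The paper does not publish implementation details, but the three-stage pipeline described above can be illustrated with a minimal, entirely hypothetical sketch: a detection step that labels an emotion, a flow step that routes it to a time-and-area store, and a mapping step that flags areas where a serious emotion was recorded. All function names, data structures and the grid-cell location format here are illustrative assumptions, not the authors' actual design.

```python
from dataclasses import dataclass
from enum import Enum


class Emotion(Enum):
    # The five emotions the article says the system recognises.
    JOY = "joy"
    PLEASURE = "pleasure"
    NEUTRAL = "neutral"
    SADNESS = "sadness"
    ANGER = "anger"


@dataclass
class EmotionReading:
    emotion: Emotion
    location: tuple   # hypothetical (x, y) grid cell of a monitored area
    timestamp: float


def vemo_bar(signal_features: dict) -> Emotion:
    """Detection stage (cf. AI-VEmoBAR): classify an emotion from
    reflected-signal features. A real system would use a trained
    model; this placeholder just reads a pre-computed label."""
    return Emotion(signal_features.get("label", "neutral"))


def vemo_flow(reading: EmotionReading, emotion_map: dict) -> dict:
    """Flow stage (cf. AI-VEmoFLOW): route a reading's emotion
    information to its (time, area) slot in a shared store."""
    emotion_map.setdefault(reading.location, []).append(
        (reading.timestamp, reading.emotion))
    return emotion_map


def vemo_map_alerts(emotion_map: dict,
                    serious=frozenset({Emotion.ANGER})) -> list:
    """Mapping stage (cf. AI-VEmoMAP): scan the virtual emotion map
    and return the areas where a serious emotion was recorded."""
    return [loc for loc, events in emotion_map.items()
            if any(emotion in serious for _, emotion in events)]
```

For example, feeding one "anger" reading at grid cell (3, 4) through the three stages would leave (3, 4) in the alert list, mirroring the article's description of serious emotions being conveyed to nearby authorities.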

The team explain that when a serious emotion, such as anger or fear, is detected in a public area, the information is rapidly conveyed to the nearest police department or relevant entities who can then take steps to prevent any “potential crime or terrorism threats.”

But there are problems with using this AI for crime detection

The system suffers from serious security issues, such as the possibility of illegal signal tampering, abuse of anonymity, and hacking-related cyber-security threats. Further, the danger of sending false alarms to the authorities remains, especially where the system misreads members of ethnic minorities or flags crimes that have not yet taken place.

While intent is enough for most law enforcement authorities to act on a report, how can the AI's alerts be followed up fairly if there are biases in the perception of crime?

A report found that one in five police in the US have an anti-Black bias.

While these concerns do put the system’s reliability at stake, Professor Kim further commented: “This is only an initial study. In the future, we need to achieve rigorous information integrity and accordingly devise robust AI-based algorithms that can detect compromised or malfunctioning devices and offer protection against potential system hacks.

“Only then will it enable people to have safer and more convenient lives in the advanced smart cities of the future.”


Original post: https://www.openaccessgovernment.org/creating-ai/103633/
