Artificial intelligence (AI) may soon know more about you than you think.
A startup called Hume AI claims to use algorithms to measure emotions from facial, vocal, and verbal expressions. It’s one of a growing number of companies that purport to read human emotions using computers. But some experts say that the concept raises privacy issues.
“Whoever controls these systems and platforms are going to have a lot of information on individuals,” Bob Bilbruck, a tech startup advisor, told Lifewire in an email interview. “They will be able to build profiles for these people that can be used for monetary gain, control of outcomes, or potentially more nefarious macro tracking of people and society.”
Hume says the secret to teaching AI to read emotions is big data. The company says it trains its AI on massive datasets from North America, Africa, Asia, and South America.
“Our vision is a world where AI translates scientific insights into new ways to improve the human emotional experience,” the company writes on its website. “Emotional awareness is the missing ingredient needed to build social media algorithms that optimize for user well-being…”
Hume is one of many companies trying to leverage data to gain insights into human emotions. Companies use emotional monitoring to try to design effective advertisements, Oleksii Shaldenko, a professor who researches AI at the Igor Sikorsky Kyiv Polytechnic Institute in Ukraine, told Lifewire in an email interview. Similar tech is used to evaluate tone of voice at call centers, monitor driver behavior in automobiles, and measure viewers' attitudes for streaming and production companies.
There are significant potential benefits for users to having AI read their emotions, AI Dynamics chief technical officer Ryan Monsurate told Lifewire in an email interview. He said one use would be to design interfaces that reduce the probability of people getting frustrated or angry with their technology.
A more challenging problem to solve would be to generate appropriate emotional responses to the emotions perceived by AI interacting with humans, Monsurate said.
“Many of us have spoken to our intelligent assistants, and while the quality of the timbre, pitch, and intonation of their voices has improved over the past decade, they are no better at communicating in a way that conveys different emotions,” he added. “I see generative models being able to generate synthetic voices with emotion and with contextually appropriate emotions as models grow in size and complexity, or as we make new breakthroughs in the field of deep learning.”
But the most immediate benefit of emotion-reading tech might be for companies trying to sell stuff. The system SenseR, for example, allows retailers to personalize the in-store experience. Computers watch and analyze the expressions and body language of shoppers, and in-store sales staff can use the results to nudge a sale in the right direction, Fariha Rizwan, an IT and public relations expert, told Lifewire in an email interview.
“From a retail analytics standpoint, the use of machine vision to track human shoppers can give a retailer insights on in-store engagement durations, interest levels based on heatmaps, store journeys, and shopper demographics,” Rizwan added.
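The metrics Rizwan describes can be pictured as simple aggregations over anonymized detection events. The sketch below is purely illustrative and assumes nothing about SenseR's actual system; the zone names, detection format, and shopper IDs are invented for the example.

```python
# Illustrative sketch: turning anonymized shopper detections into two of
# the retail metrics mentioned above (engagement duration and a dwell-time
# "heatmap" of interest by store zone). All data here is hypothetical.
from collections import defaultdict

# Each detection: (shopper_id, store_zone, seconds_observed)
detections = [
    ("a1", "electronics", 40),
    ("a1", "checkout", 15),
    ("b2", "electronics", 90),
    ("b2", "apparel", 30),
]

# Total in-store engagement duration per shopper
engagement = defaultdict(int)
# Interest heatmap: total dwell time per zone
heatmap = defaultdict(int)

for shopper, zone, seconds in detections:
    engagement[shopper] += seconds
    heatmap[zone] += seconds

print(dict(engagement))            # per-shopper engagement in seconds
print(max(heatmap, key=heatmap.get))  # zone drawing the most dwell time
```

A real deployment would derive these events from computer vision rather than a hardcoded list, which is precisely where the consent and data-retention concerns discussed below arise.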
Who Owns Your Emotions?
As companies increasingly turn to AI to read emotions, many potential privacy pitfalls exist. The facial recognition technologies that drive emotion-reading systems tend to operate in public and private spaces without the consent of the people being tracked, saving their data and at times selling it to the highest bidder, Rizwan said.
“We also don’t know the extent to which these systems are protected from cyberattacks, potentially placing a person’s facial map in the hands of a bad actor,” Rizwan added. “These concerns have initiated a shift in enhanced monitoring, surveillance, privacy disclosures, and accountability.”
The biggest privacy concerns are not related to AI itself but to the underlying information-sharing frameworks and regulations already in place, Monsurate argued. If companies can monetize your data and use it to manipulate your behavior, then understanding your emotional state will help them do that better.
“What we need are laws to disincentivize this behavior in the first place, regardless of what tools they use to achieve their aims,” Monsurate added. “It is not the tools but the bad actors, and our current privacy laws are woefully inadequate.”