Let’s face it: When a company develops artificial intelligence (AI) that can offer us a medical diagnosis, care for our elderly grandparents or autonomously drive a vehicle, ethics aren’t the flashiest element to focus on. It’s tempting for companies to get caught up in the excitement of creating the latest cutting-edge technology and vow to sort out ethical considerations after the fact. That works just as well, right?
Late last year, I had a conversation with Thomas Arnold, a research associate at Tufts’ Human-Robot Interaction Lab, for my company’s podcast. He said something about AI and ethics that really stuck with me: “Ethics are not ornate toppings to consider at the end of the design process, like sprinkles on a sundae. Ethics need to start at the very beginning, when thinking about what an automated system is and what it’s being designed to do.”
And in my experience, ethical considerations become even more pressing as AI advances. Consider an example from my line of work at a company that builds intelligent virtual assistants (IVAs). Outdated customer service technology is obvious — you immediately know when you’re dealing with a machine, not a human. But when technology advances to a point where it sounds lifelike, that line gets blurred, and ethical considerations must be prioritized.
Ethical AI design is possible (not to mention necessary) for any company, and it begins from the ground up. It’s time for businesses to abandon the idea that ethics are just the sprinkles on top; here are a few best practices for organizations to ensure ethical design in AI and robotics.
Create interdisciplinary dream teams.
The first step toward ethical design is assembling the right team. Every AI team obviously needs technical talent. But to cover all the bases, it’s critical to bring in other perspectives, too. In my experience, AI experts don’t always have a knack for anticipating what features will seem “creepy,” or determining whether the application’s voice should be male or female. This is where interdisciplinary teams come in.
Thomas’ lab, for example, features leaders not only in computer science and robotics, but also in linguistics, theology and psychology. Technical talent makes it possible to build avant-garde AI applications, but those experts alone can’t always account for how these creations will later interact with people, how receptive society will be or how technology could cross an inappropriate line. Interdisciplinary teams can more comprehensively evaluate an application’s practical and ethical implications.
Even the most cutting-edge creations don’t hold much value if they don’t have a practical or appropriate application. Interdisciplinary teams allow companies to consider these ethical implications from the very beginning.
Know that when it comes to AI, presentation matters.
Once the team is formed, it faces one of the biggest ethical considerations in AI and robotics: presentation.
A robot’s look and sound may seem like a trivial, surface-level detail. But the presentation of an AI application actually holds significant weight. Earlier this year on another episode of our podcast, I spoke with Dr. Michael Littman, a professor at Brown University and co-director of the university’s Humanity Centered Robotics Initiative, about responsible robot design. He emphasized the importance of physical form because humans have “visceral” reactions to different appearances.
Various shapes, sizes, sounds and styles of movement create different expectations for what an AI application is capable of, and affect how humans perceive and interact with it. We see this in our own everyday experiences with AI: A Roomba vacuum inspires a different reaction and expectation than Alexa, which sparks a different reaction than a robot designed to look like a human.
In fact, robots designed to closely resemble humans open up a new ethical can of worms. Humanlike design can lead people to associate human thought and emotion with the robot (a concern for many), and people can grow so attached that they mourn robots when they’re no longer around. It’s critical that the application a team creates conveys the right impression and capabilities.
Be prepared to explain AI’s thinking.
As AI advances, it becomes increasingly integral to our daily lives — from hearing the weather from Alexa in our living room to dodging a cleanup robot at the grocery store. It may even take care of us when we’re old someday. But to truly feel comfortable welcoming AI into all aspects of our lives, we need to trust and, to an extent, understand it.
Transparency and explainability are the final tenets of ethical AI and robotics design, and they’re becoming increasingly important as AI becomes more sophisticated and takes on more important tasks. We may not need total insight into Alexa’s capabilities, or a reasonable explanation for why Alexa says the things it does. But not everything is as low stakes as an Alexa joke; for many emerging applications of AI, we will need greater visibility and insight into their capabilities and decision making.
Self-driving cars are an excellent example. It’s critical that the companies behind autonomous vehicles understand and can explain how the technology makes split-second decisions. The same can be said of AI in medical diagnoses: If AI identifies a problem the doctor missed, the patient and doctor alike will need to know how the technology reached that outcome.
Unintelligible black boxes may be permissible in low-stakes situations, but they won’t cut it when technology touches every element of our lives and makes important decisions. Transparency and explainability are paramount to ensuring ethical design and, ultimately, widespread trust and comfort.
AI has the potential to enhance nearly every corner of our lives, but it will only realize that potential if we feel comfortable letting it. The onus is on companies to create this comfort by integrating ethical considerations into the design process. Businesses can no longer think of ethics as just the sprinkles on top; cutting-edge design must incorporate ethical considerations from the get-go.