3 principles for protecting the world from A.I. bias

Until the late 1960s, we knew very little about what went into the foods we bought. Americans prepared most food at home, with fairly common ingredients, and we didn't see much need to know more. Then food production began to evolve, and our foods contained more artificial additives. In 1969, a White House conference recommended that the Food and Drug Administration take on a new responsibility: developing a new way to understand the ingredients and nutrition of what we eat.

That task took two decades. It wasn't until 1990 that the FDA published rules mandating nutrition labels on packaged food. In other words, from the moment we recognized what we needed, it took 20 years to get the safeguards in place.

Like the arrival of processed foods, the advent of artificial intelligence marks a new age, and whether it turns out to be good or bad for us will depend on what goes into it. The difference is that, at the pace at which A.I. is developing, we do not have 20 years, or even two, to put safety measures in place. The good news: businesses can take the first and most critical step of identifying harmful or unacceptable A.I. bias, and then rapidly coalesce around the principles that mitigate it.

A.I. bias occurs when software does something unintended, or something with malicious intent. In hiring, for example, we could design an A.I. system to look for the best candidates for a role. The A.I. would look for exactly what we specify: relevant work experience, a strong educational background, and perhaps community service. Over time, the A.I. could exclude an entire population simply because of the classes they took in college, by drawing a correlation between community service and courses taken, even though that connection isn't causal in any way. In other words, A.I. could unintentionally lead to poor hiring decisions.
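To make that mechanism concrete, here is a minimal Python sketch (using scikit-learn on synthetic data; the feature names `took_courses` and `did_service` are invented for illustration, not drawn from any real system) of how a model trained on past hiring outcomes can latch onto a non-causal proxy and screen out a whole group:

```python
# Hypothetical sketch: the hiring labels reward community service, but that
# field is absent from the features, so the model leans on a correlated proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# In this synthetic history, applicants who took a certain set of courses
# happen to report less community service -- correlation, not causation.
took_courses = rng.integers(0, 2, n)                 # 1 = took the courses
did_service = rng.random(n) < np.where(took_courses == 1, 0.2, 0.7)

# Past hiring decisions rewarded community service (with some noise).
hired = (did_service & (rng.random(n) < 0.9)).astype(int)

# The model never sees the service field -- only the course indicator --
# so it learns the proxy and effectively screens out course-takers.
X = took_courses.reshape(-1, 1)
model = LogisticRegression().fit(X, hired)

for group in (0, 1):
    rate = model.predict(X[took_courses == group]).mean()
    print(f"took_courses={group}: predicted hire rate {rate:.2f}")
```

Run as written, the model approves essentially everyone outside the course-taking group and rejects essentially everyone in it, even though the courses were never part of the hiring criteria.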

It is not hard to imagine even more egregious scenarios: A developer unintentionally embedding bias in A.I. that excludes a population because of gender. Or, in the case of a bank, A.I. that rejects all loans originating in a certain zip code, without any human knowledge of that decision. Or in retail, a loyalty program only rewarding customers of a certain socioeconomic background.

Models built by humans reflect human biases. Because they do, whatever the intent, we may find that the most critical decisions are being made by an irrational actor: poorly trained software. To combat this, we must proactively address bias and develop and deploy A.I. in a socially responsible way, using a governed approach to protect both individuals and society.

We must ensure that the A.I. we use makes decisions with bias mitigated, particularly in high-stakes arenas such as health care, public or financial services, and justice.

Fortunately, there is a set of principles we can follow that quickly puts us on the path to socially responsible A.I., and to A.I. risk management in general.

  1. Fairness: A.I. must represent the values and ethics of the organization leveraging it. It should make the decisions your best employee would make if they were taking the action. In short, it should be fair and free from bias, whether introduced by who created it, where the data came from, or any other factor that could compromise equity.
  2. Quality: Assuming it is fair in its intent, A.I. is only as good as the data it analyzes. Remember the expression "garbage in, garbage out"? It holds true in the world of data and A.I. Quality means ensuring that A.I. performs as expected: it is accurate, it minimizes false positives and false negatives, and it has the right inputs and outputs.
  3. Drift: In many businesses, a good decision today may not be a good decision tomorrow. That is the nature of fast-moving, dynamic environments. It is therefore critical to understand how A.I. behaves when the environment changes. COVID-19 is a stark reminder that the fundamentals of an environment can change, and A.I. must adjust in real time. (Each of these principles can be monitored in code; see the sketch after this list.)
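As a rough illustration of what such monitoring might look like, here is a hedged Python sketch (the function names, thresholds, and synthetic data are all assumptions for illustration, not production values) of one simple check per principle: a demographic-parity gap for fairness, accuracy and error rates for quality, and a two-sample Kolmogorov-Smirnov test for drift:

```python
# Illustrative monitoring checks for the three principles.
# All thresholds and data below are invented for this sketch.
import numpy as np
from scipy.stats import ks_2samp

def fairness_gap(decisions, group):
    """Demographic-parity gap: difference in approval rate between two groups."""
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

def quality_rates(pred, truth):
    """Accuracy plus false-positive and false-negative rates."""
    neg, pos = truth == 0, truth == 1
    return {
        "accuracy": (pred == truth).mean(),
        "fp_rate": ((pred == 1) & neg).sum() / max(neg.sum(), 1),
        "fn_rate": ((pred == 0) & pos).sum() / max(pos.sum(), 1),
    }

def has_drifted(train_feature, live_feature, alpha=0.01):
    """Kolmogorov-Smirnov test: has a feature's live distribution shifted?"""
    return ks_2samp(train_feature, live_feature).pvalue < alpha

# Synthetic example: random decisions, plus a COVID-style shift in one input.
rng = np.random.default_rng(1)
decisions = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
truth = rng.integers(0, 2, 1000)

print("fairness gap:", round(fairness_gap(decisions, group), 3))
print("quality:", quality_rates(decisions, truth))
print("drifted?", has_drifted(rng.normal(0, 1, 1000), rng.normal(0.5, 1, 1000)))
```

In practice, checks like these would run continuously against production traffic, with alerts wired into whatever governance process the organization already uses.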

A.I. can lead to a new era of productivity, personalization, and even equality—but only if it’s well managed, and if businesses are held accountable for how they deploy and manage it. This era demands a new standard of good technology—in A.I. and more broadly—and that’s why any business using A.I. should consider these three key “ingredients.” As A.I. scales, and we get to the point where every business has hundreds if not thousands of A.I. models in production, these will become the modern-day nutrition labels for software. They will help ensure our A.I. is healthy.

Original post: https://fortune.com/2021/07/12/ai-bias-artificial-intelligence-business-protection/
