Rise of the racist robots prompts artificial intelligence upgrades

Businesses seeking to cut costs and satisfy consumer demands for lightning-fast service are increasingly replacing people with machines in every part of the supply chain.

But regulators are becoming concerned that relying on artificial intelligence could undo decades of work put into making sure consumers are treated fairly regardless of their race or gender.

Financial institutions have become the latest target in a wider crackdown on discrimination in artificial intelligence after the Bank of England launched a campaign to make sure new technologies don’t restrict lending to ethnic minorities.

The Bank’s deputy governor David Ramsden held an industry roundtable with Mastercard, Visa, Capital One, Starling Bank, Experian, and others in October 2021 to discuss the ethics of artificial intelligence. Ramsden warned attendees about the elimination of “human judgement and oversight” in decisions.

Now, the competition regulator is gearing up to publish its own findings on how algorithms are used in wider digital markets.

Its efforts come after a slew of high-profile scandals highlighting racial bias in algorithms. Amazon was criticised for excluding predominantly black neighbourhoods in its expansion of Prime services in the US. Twitter was forced to scrap an image-cropping algorithm after users discovered it was prioritising white faces. Uber has been taken to court in the UK after a driver alleged his account was terminated because the company’s facial recognition software could not identify him due to his race, with a hearing due at the end of the month.

The Competition and Markets Authority (CMA) is scrutinising tech giants’ algorithms to make sure they are not unfairly targeting certain groups. It has been researching the topic for more than a year and is preparing to publish new findings in the coming months.

The CMA has recruited data scientists and engineers to a new unit to support its efforts. These analysts have tracked the algorithms and datasets digital companies use to target individuals with particular services and products. The division is part of the new Digital Markets Unit (DMU) set up to regulate companies such as Google, Facebook, Amazon and Apple.

The CMA has developed new techniques to build competition cases on algorithms and will share its findings with other agencies shortly. Separately, it will publish a paper on the harms and benefits of artificial intelligence and how it should be regulated. Regulators could take direct action against the technology giants’ algorithms when new legislation passes through Parliament to put the DMU on a statutory footing.

A CMA spokesman says: “Much of people’s lives are spent online and many of these activities and the markets that underpin them could not exist without algorithms. While these have enabled gains in efficiency and effectiveness they can negatively impact consumers in various ways.

“That is why the CMA is developing an understanding of this area to address the issues raised.”

History of warnings

The CMA first produced a paper on algorithmic discrimination in January 2021, warning there were “numerous examples” of how algorithms could illegally discriminate against individuals. It found technology could discriminate against consumers by targeting people who live in a particular location, or by targeting advertising toward select groups.

Amazon came under the spotlight in the paper. The CMA pointed to research about the expansion of its Prime service in the US in 2016. The research found Amazon prioritised areas with a high concentration of Prime members when it expanded its free same-day deliveries. But this policy excluded predominantly black neighbourhoods from the service.

The CMA report said: “People with protected characteristics are unevenly distributed geographically. As a result, even simple policies implementing regional pricing or varying services available in different areas could potentially result in indirect discrimination.”

The CMA also warned that online platforms sometimes allowed – or failed to prevent – advertisers from targeting people according to protected characteristics. Research published by ProPublica in 2017 found it was possible to place property adverts that excluded people with a certain “ethnic affinity” on Facebook. The CMA did however point out that Facebook has modified its systems to limit the ability of advertisers to target certain demographics.

The regulator also found that Facebook’s algorithms for delivering ads could be skewed according to gender and race. Advertising to women is more expensive because it is a more competitive market, which means women can receive fewer ad impressions from low-budget advertisers. The CMA cited research from Northeastern University in the US that found advertising campaigns with lower budgets ultimately showed fewer adverts to women on Facebook.

The CMA report said: “This can result in ad campaigns that were intended to be gender neutral – for example for STEM career ads – being served in an apparently discriminatory way to more men because it is more cost-effective to do so when maximising the number of ad impressions for a given budget.”

Facebook says it has been working to improve the fairness of its artificial intelligence, and that it requires advertisers to certify they understand its policies prohibiting discriminatory practices.

A spokesman for Facebook says: “We stand against discrimination in any form and we are committed to studying algorithmic fairness. We recently expanded limitations on targeting options for job, housing and credit ads to the UK.”

Alan Davis, partner at Pinsent Masons, says: “All of this is around protected characteristics and whether or not data will be misused through the employment of systems to effectively personalise the products or services or pricing that’s being offered to particular consumers.

“And that could be discriminating on the basis of gender, or sexual orientation, or race.

“There’s lots of literature in the US about the activities of some of the digital companies around neighbourhoods or districts where there’s a high proportion of black people as opposed to white people, and if you’re located in that neighbourhood you will get preferential quality of service.

“The laws are there [to stop this] but it will be difficult to detect it. How will regulators know and detect whether a company is targeting someone based on whether they live in a particular part of south London, or a particular suburb of Birmingham?”

The paper published last year by the CMA does not give a direct insight into the investigations currently underway, and the examples it cited were largely historical. Rather, it highlighted the potential for discrimination in digital markets and indicated the kind of consumer harm researchers in its new unit will be seeking to identify.

Banks’ algorithms also came under scrutiny in the CMA’s work. Its researchers found that even if humans are partly involved in making loan decisions, there is still potential for discrimination. As consumers increasingly demand mortgage approvals with the click of a button, banks and other service providers will be looking to make sure their technology doesn’t lock them out altogether.


Original post: https://www.telegraph.co.uk/business/2022/02/16/rise-racist-robots-prompts-upgrades-artificial-intelligence/
