TechScape: can AI really predict crime?

In 2011, the Los Angeles police department rolled out a novel approach to policing called Operation Laser. Laser – which stood for Los Angeles Strategic Extraction and Restoration – was the first predictive policing programme of its kind in the US, allowing the LAPD to use historical data to predict with laser precision (hence the name) where future crimes might be committed and who might commit them.

But it was anything but precise. The programme used historical crime data such as arrests, calls for service and field interview cards – which police filled out with identifying information every time they stopped someone, regardless of the reason – to map out “problem areas” for officers to focus on and to assign criminal risk scores to individuals. Information collected during these policing efforts was fed into computer software that further automated the department’s crime-prediction efforts. The picture of crime the software presented, activist groups such as the Stop LAPD Spying Coalition argue, simply validated existing policing patterns and decisions, inherently criminalising locations and people on the basis of a controversial hypothesis: that where crimes have occurred once, they will occur again. The data the LAPD used to predict the future was rife with bias, experts argue, leading to the over-policing and disproportionate targeting of Black and brown communities – often the same ones the department had been targeting for years.

About five years into the programme, the LAPD focused on an intersection in a south LA neighbourhood that the late rapper Nipsey Hussle was known to frequent, according to documents my colleague Sam Levin and I reviewed and first reported on in November. It was the intersection where he grew up, and where he later opened a flagship clothing store as an ode to his neighbourhood and a means to move the community forward economically. There, in search of a robbery suspect described only as a Black man between the ages of 16 and 18, the LAPD stopped 161 people in the space of two weeks. Nipsey Hussle had complained of constant police harassment before then, too, saying as early as 2013 that LAPD officers “come hop out, ask you questions, take your name, your address, your cell phone number, your social, when you ain’t done nothing. Just so they know everybody in the hood.” In an interview with Sam Levin, Nipsey’s brother Samiel Asghedom said nobody could go to the store without being stopped by police. The brothers, co-owners of The Marathon Clothing store, even considered relocating it to avoid the harassment.

Ultimately, the LAPD was forced to shutter the programme, conceding that the data did not paint a complete picture.

Fast-forward nearly 10 years: the LAPD is working, on a trial basis, with a company called Voyager Analytics. Documents the Guardian reviewed and wrote about in November show that Voyager claimed it could use AI to analyse social media profiles to detect emerging threats based on a person’s friends, groups, posts and more. It was essentially Operation Laser for the digital world. Instead of focusing on physical places or people, Voyager looked at the digital worlds of people of interest to determine whether they were involved in crime rings or planned to commit future crimes, based on whom they interacted with, what they posted, and even their friends of friends. “It’s a ‘guilt by association’ system,” said Meredith Broussard, a New York University data journalism professor.

Voyager claims all of this information on individuals, groups and pages allows its software to conduct real-time “sentiment analysis” and find new leads when investigating “ideological solidarity”. “We don’t just connect existing dots,” a Voyager promotional document read. “We create new dots. What seem like random and inconsequential interactions, behaviours or interests, suddenly become clear and comprehensible.”

But systems like Voyager’s and Operation Laser are only as good as the data they are built on – and biased data produces biased results.

In a case study showing how Voyager’s software could be used to detect people who “most fully identify with a stance or any given topic”, the company looked at the ways it would have analysed the social media presence of Adam Alsahli, who was killed last year while attempting to attack the Corpus Christi naval base in Texas. Voyager said its software deemed that Alsahli’s profile showed a high proclivity towards fundamentalism. The evidence it pointed to included the fact that 29 of Alsahli’s 31 Facebook posts were pictures with Islamic themes, and that one of Alsahli’s Instagram handles, which was redacted in the documents, reflected “his pride in and identification with his Arab heritage”. The company also pointed out that of the accounts he followed on Instagram, “most are in Arabic” and “generally appear” to be accounts posting religious content. On his Twitter account, Voyager wrote, Alsahli mostly tweeted about Islam.

Though the case study was redacted, many aspects of what Voyager viewed as signals of fundamentalism could also qualify as free speech or other protected activity. The case study, at least the parts that we could see, reads like the social media profiles of your average Muslim dad.

While the applications may seem different, the two cases show law enforcement’s ongoing desire to advance its policing, and the limitations – and in some cases the bias – deeply embedded in the data these systems rely on. Some activists say police deploy systems purporting to use artificial intelligence and other advanced technologies to do what the technology really isn’t capable of doing: analysing human behaviour to predict future crime. In doing so, they often create a vicious feedback loop.
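To make that feedback loop concrete, here is a minimal, hypothetical simulation – my own sketch with made-up numbers, not any department’s or vendor’s actual system. Two districts have the same underlying incident rate, but district A starts with more recorded crime because it was policed more heavily in the past; patrols are then sent wherever the records say the “problem area” is, and only patrolled incidents get recorded.

```python
# Hypothetical sketch of a predictive-policing feedback loop (illustrative only).
import random

random.seed(0)

TRUE_RATE = 0.05            # identical chance of an incident per patrol encounter
ENCOUNTERS_PER_DAY = 100    # fixed patrol capacity, sent to the predicted "hot spot"
DAYS = 365

recorded = {"A": 120, "B": 60}   # biased historical data (invented numbers)

for _ in range(DAYS):
    # "Prediction": patrol wherever the historical data says crime is worst.
    hot_spot = max(recorded, key=recorded.get)
    # Incidents only enter the database where officers are actually looking.
    observed = sum(random.random() < TRUE_RATE for _ in range(ENCOUNTERS_PER_DAY))
    recorded[hot_spot] += observed

print(recorded)   # roughly {'A': 1900+, 'B': 60}
```

Even though the two districts are identical by construction, district A’s record keeps growing while district B’s is frozen, so the data ends up “confirming” the original deployment decision rather than describing where crime actually happens.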

The main difference is that there’s now an entire sector of tech clamouring to answer law enforcement’s call for more advanced systems. Those answering include not only companies that build overt surveillance or policing products, but also consumer tech companies the average person interacts with on a daily basis, such as Amazon. Amazon, for its part, worked with the LAPD to give its officers access to its network of Ring cameras. For police, the motivation for such partnerships is clear: the technology lends credence to their policing decisions and potentially makes their jobs easier or more effective. For tech companies, the motivation is to tap into revenue streams with growth potential. A lucrative government contract with seemingly endless funding is a hard prospect to resist, especially as many other avenues for growth have started to dry up. It’s why internal employee opposition has not deterred companies like Google, which continues to go after military contracts in spite of years of employee strife.

From the New York Times: “In 2018, thousands of Google employees signed a letter protesting the company’s involvement in Project Maven, a military program that uses artificial intelligence to interpret video images and could be used to refine the targeting of drone strikes. Google management caved and agreed to not renew the contract once it expired.

“The outcry led Google to create guidelines for the ethical use of artificial intelligence, which prohibit the use of its technology for weapons or surveillance, and hastened a shake-up of its cloud computing business. Now, as Google positions cloud computing as a key part of its future, the bid for the new Pentagon contract could test the boundaries of those AI principles, which have set it apart from other tech giants that routinely seek military and intelligence work.”

Where does a company like Google, whose tentacles already reach into nearly every industry, go to continue growing its business? Right now, the answer appears to be working with the government.

Readers, I’d love to hear how you feel about tech companies working with law enforcement to equip them with predictive policing or other surveillance technology.

The wider TechScape

Speaking of surveillance, the advocacy group Surveillance Technology Oversight Project has published a new map showing where all the internet-enabled Hikvision cameras in New York City are located. There are about 17,000 sprawled across the boroughs, and the group says they could easily be paired with facial recognition, which it describes as an “error-prone, biased, and invasive technology that has come under growing national scrutiny.” Hikvision is also, as I’ve pointed out in the past, a company that has come under scrutiny for its alleged complicity in China’s ongoing campaign against Uyghurs. Hikvision developed Uyghur-detection features and was awarded a Chinese government contract to put facial recognition cameras in front of mosques and re-education camps.

In non-surveillance news, tech companies are not being spared by the recent Omicron surge. As Amazon rolled back its safety protocols in warehouses, a single warehouse in Oregon saw a 51-case spike between 28 November and 12 December, according to The Markup, which reports that since last spring:

At least five of the company’s warehouses in the state have experienced outbreaks … still-active outbreaks have been ongoing for more than 565 days—longer than at any other workplace in Oregon, including the state’s hospitals and prisons, according to the records. But instead of stepping up safety measures, Amazon has been rolling back pandemic protocols in its warehouses in Oregon and around the country, citing vaccinations as the best way to reduce transmission of the virus.

In a statement to The Markup, Amazon said the company complies with all local and national health guidelines.

At least 132 employees at SpaceX’s Los Angeles headquarters have tested positive for Covid-19, according to Bloomberg, which would make it the largest workplace outbreak in Los Angeles County. In a memo to workers, SpaceX said the outbreak stems from September, when “several employees who work in the same area contracted Covid outside of work at a non-work-related event.” Only one of the 132 employees is suspected of contracting the virus at work.

 

Original post: https://www.theguardian.com/technology/2021/dec/22/techscape-lapd-operation-laser
