Artificial intelligence researchers and companies disappointed with President Donald Trump’s A.I. policies may be more hopeful about the technology’s future now that Joe Biden has been elected president.
One major sore point with federal A.I. policy has been the amount of funding the Trump administration allocated for nonmilitary A.I.-related research. In February, the White House said it would bump non–defense-related A.I. investment to $2 billion annually by 2022, a figure some analysts considered paltry given the staggering sums needed to produce cutting-edge A.I. research.
Although the Biden administration has not detailed its exact plans for A.I. research, the Democrat’s campaign indicated that it considers general scientific research and development crucial to the nation. Biden has proposed increasing federal R&D spending by $300 billion over four years. By comparison, the White House planned to spend $142.2 billion on federal R&D as part of President Trump’s 2021 budget.
The Biden campaign said that “declines in federal R&D spending have contributed to a hollowing out of the American middle class,” and that its proposed investment would benefit “key technologies” like “5G, artificial intelligence, advanced materials, biotechnology, and clean vehicles.”
Echoing sentiments by former Google chief Eric Schmidt, the Biden campaign pointed to China as a major reason the U.S. needs to ramp up technology and science spending, saying that “China is on track to surpass the U.S. in R&D.”
“China’s government is actively investing in research and commercialization across these types of important technology areas, in an effort to overtake American technological primacy and dominate future industries,” the Biden campaign said.
Companies and researchers will have to wait to see exactly how the Biden administration divvies up its proposed funding among A.I.-specific initiatives. With so much attention on hot-button issues like COVID-19, systemic racism, and the economy, both Trump and Biden paid little attention to A.I. during their heated political battles.
Still, previous statements and initiatives give us a window into how the new administration views A.I. and related technologies like facial recognition.
Vice President–elect Kamala Harris has previously called attention to the potential problems of using A.I. in the criminal justice system. Numerous researchers and activists are concerned about facial recognition software’s tendency to work better on white men than on women and people of color, and about its potential misuse by police departments with a history of racism.
Last December, Harris and other lawmakers like Sen. Cory Booker (D-N.J.) and Sen. Ed Markey (D-Mass.) also called on the U.S. Department of Housing and Urban Development (HUD) to review policies governing the use of facial recognition software in federally assisted housing.
Harris and the other lawmakers were concerned “that the expansion of facial recognition technology in federally assisted housing properties poses risks to marginalized communities, including by opening the door to unchecked government surveillance that could threaten civil rights.”
Of course, activists have criticized Harris for the tough-on-crime approach she took during her stint as a prosecutor, which they claim resulted in the over-incarceration of Black people.
But judging by her rhetoric and actions so far, it’s possible the new Biden administration may propose tougher restrictions on federal use of facial recognition. What the administration plans for corporate use of facial recognition remains to be seen.
A.I. IN THE NEWS
Sometimes big data isn’t enough. The Washington Post published a deep dive into the final days of the 2020 presidential election that contains an interesting look into the Trump campaign’s decision to emulate former President Barack Obama’s 2012 campaign and its emphasis on data. The Trump campaign “would invest heavily in data, build a massive volunteer network nationwide, and knock on millions of doors,” the report said.
From the article: “Down the stretch, the Trump campaign placed enormous faith in its massive voter contact and mobilization effort, a project that cost more than $350 million. All campaign events, including the president’s rallies, were used as opportunities to mine for new data and bring people into the political system.”
Clearly, that strategy didn’t quite work out as intended.
What is true? Facebook’s A.I. research group published a paper in August detailing a new way of identifying bad actors who have the potential to spread misinformation, tech publication OneZero reported. Although it’s unclear whether the social network used the technology described in the paper during the recent presidential election, the paper did reveal some insights, including the fact “that some of the company’s existing algorithms built to detect fake behavior were laboriously built by hand,” OneZero noted. From the report: “This new tech can only surface posts or accounts that potentially break Facebook’s rules—but how those rules are enforced and when action is taken is mostly left up to Facebook’s human moderators.”
The Pony express. Pony.ai, a startup specializing in self-driving cars, scored $267 million in funding from lead investor the Ontario Teachers’ Pension Plan Board and others including Fidelity China Special Situations PLC and 5Y Capital. Similar to Alphabet’s Waymo, Pony.ai is developing so-called robotaxi services. With the new investment, Pony.ai said it now has a valuation of over $5.3 billion.
Intel’s A.I. acquisition. Intel has acquired the startup Cnvrg.io, which specializes in data science tools for machine-learning practitioners, for an undisclosed sum, TechCrunch reported. The semiconductor giant recently bought another machine-learning software tool company, SigOpt. TechCrunch posited that with the recent deals, Intel aims to “provide/invest in AI tools for customers, specifically services to help with the compute loads that they will be running on those chips.”
EYE ON A.I. TALENT
Weta Digital hired Joe Marks to be the digital effects company’s chief technology officer. Marks was previously the executive director of Carnegie Mellon’s Center for Machine Learning and Health.
Integrity Applications, Inc. picked Shalom Shushan to be the health tech company’s CTO. Shushan was previously the vice president of research and development for Crow Electronic Engineering.
EYE ON A.I. RESEARCH
U Can’t Touch This A.I. paper. Researchers from the California Institute of Technology and Purdue University published a paper about using deep learning to solve several kinds of partial differential equations. As MIT Tech Review explained, these partial differential equations are “used to model everything from planetary orbits to plate tectonics to the air turbulence that disturbs a flight, which in turn allows us to do practical things like predict seismic activity and design safe planes.”
The paper’s authors explained the usefulness of using A.I. to solve these kinds of equations more quickly than existing methods, saying “Machine learning methods hold the key to revolutionizing many scientific disciplines by providing fast solvers that approximate traditional ones.”
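The idea of a fast learned solver approximating a traditional one can be sketched in miniature. The toy example below is not the paper’s method (which uses neural operators for far harder, nonlinear equations); it simply learns the solution operator of the linear 1-D heat equation from examples produced by a conventional finite-difference solver, and all names and parameters here are invented for illustration.

```python
import numpy as np

# Toy sketch of operator learning for a linear PDE (the 1-D heat equation).
# A traditional solver (explicit finite differences) generates training data;
# a learned linear map then approximates that solver in a single matrix multiply.

rng = np.random.default_rng(0)
n = 32            # spatial grid points
steps = 200       # time steps taken by the reference solver
dt, dx = 1e-4, 1.0 / n   # dt/dx^2 ≈ 0.10, within the explicit-scheme stability limit

def solve_heat(u0):
    """Reference solver: explicit finite differences with periodic boundaries."""
    u = u0.copy()
    for _ in range(steps):
        u = u + dt / dx**2 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
    return u

# Training data: random initial conditions paired with their evolved states.
X = rng.standard_normal((200, n))
Y = np.array([solve_heat(u0) for u0 in X])

# "Learn" the solution operator as a linear map (exact for a linear PDE).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The surrogate replaces 200 solver steps with one matrix multiply.
u0_test = rng.standard_normal(n)
pred = u0_test @ W
truth = solve_heat(u0_test)
print("max surrogate error:", np.max(np.abs(pred - truth)))
```

Because the heat equation is linear, the learned map recovers the solver essentially exactly; the appeal of neural-operator methods is that the same train-once, evaluate-fast pattern extends to nonlinear equations where no such closed-form linear map exists.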
And it’s not only A.I. researchers who seem excited about the paper. MC Hammer, known for the 1990 hit “U Can’t Touch This,” tweeted about the paper, MIT Tech Review noted.
FORTUNE ON A.I.
California just passed tougher privacy rules that may reverberate nationwide—By Jonathan Vanian
U.S. visas for Chinese students have all but dried up—By Naomi Xu Elegant
Massachusetts expands ‘right to repair’ law for automakers—By Robert Hackett
3 key takeaways from Uber’s latest pandemic-impacted quarter—By Danielle Abril
On ethics, transparency, and hype in A.I. Although researchers are excited about A.I.’s potential to improve health care, the technology’s so-called black box design “raises ethical issues that are paramount and fundamental in order to avoid harming patients, creating liability for health care providers, and undermining public trust in these technologies,” Grant Thornton Public Sector director of health and health informatics Satish Gattadahalli wrote in an opinion piece published by STAT News.
Gattadahalli outlines some steps health care firms and researchers can take to address A.I.’s transparency problems, including establishing digital ethics committees, subjecting algorithms to peer review, and testing the A.I. systems in “controlled experiments that are blinded and randomized.”
And it’s not just the health care industry that faces these ethical dilemmas posed by A.I. systems.
Fortune’s David Morris probed some of the concerns researchers have about Tesla’s “Full Self-Driving” features for its automobiles. Critics worry that the “full self-driving” label overhypes the software’s capabilities.
From the article:
By industry standards, Tesla’s system is considered an advanced driver assistance system, or ADAS, and not autonomous driving technology. There is hope that ADAS systems will increase road safety overall. But there is also concern that if drivers place too much faith in assistive technology, they may become inattentive, undermining those safety benefits.
EAR ON A.I.
This week on Fortune’s Brainstorm podcast, we explore the technology that’s helping fight COVID-19.
Brian O’Keefe speaks with Nvidia’s VP and General Manager of Healthcare. The company’s A.I. platforms are driving not only the search for a vaccine but also the development of therapeutics, COVID-19 testing, and more.
Then, Michal Lev-Ram points out that even once a vaccine is discovered, governments have to figure out the logistics of distribution. Lev-Ram speaks with Qualtrics President Zig Serafin about these problems. Listen to the episode here.