National AI strategies lack planning for operational investments, are far more aspirational than practical and fail to consider funding realities, according to a new report by the internationally renowned think tank the Brookings Institution.
Most government plans on how and where to invest in artificial intelligence (AI) are missing critical details about how they will be implemented, according to the independent study.
The quest for domination in AI is “the space race of our time”, with significant resources being invested globally by governments in building capabilities in this area, the US-based Institution’s report says.
Researchers assessed 34 national plans on AI to obtain a snapshot of how national governments are thinking about using AI and assess the global state of investment. Most of the plans covered public sector functions and sectors of the economy that could benefit from AI; how to build AI capacity; governance concerns; data management opportunities and challenges; and algorithmic design challenges.
The plans revealed that most governments saw similar opportunities in AI, with health care, technology, agriculture, and manufacturing flagged as having the greatest potential for transformation. The strategies also painted a similar picture of risk, including the need for regulatory frameworks for data, the impact of algorithms on social inequality, and the need to increase transparency in the operation of AI systems.
Governments recognise their role in building platforms and programs that support data sharing between the public sector and external stakeholders in order to speed up innovation, and understand that they need to plan for significant investment in research and development, the study states.
Italy was found to have the most comprehensive plan, followed by France, Germany, New Zealand, and the US, though this ranking reflected only breadth of coverage, not level of sophistication.
However, most of the plans lacked critical elements, the research found, raising particular concerns over the attention paid to execution. For example, the plans did not outline who would be responsible for implementation, or set out timeframes or the metrics that should be used to gauge performance.
The researchers also criticise the strategies for ignoring funding realities. AI will significantly impact local government revenues – as an example, the report forecasts a fall in income from speeding fines when autonomous vehicles are introduced – yet the plans fail to acknowledge this, they say.
None of the plans had a communications strategy, the report notes. The public sector needs to drive the conversation on how AI will impact its regions, cities and communities; but beyond having the plans posted on websites, governments have not set out how they’ll communicate with the public on the implications and opportunities.
“In short, most of these plans remain far more aspirational than practical. While we remain excited about the prospects of AI to enable advanced solutions to address challenges and realise opportunities for innovation, our excitement should be tempered with the fact that the devil is in the details and, to be frank, the public sector has a chequered track record of large-scale systems implementations,” it concludes.
A separate paper from the Institution examines the relationship between the EU and the US – a major trading partner and world leader in AI. It suggests that, as the world’s leading economies with strong ties grounded in common values, the EU and US could provide a global model for AI governance.
While Europe lags in terms of digital adoption as well as development of AI, the US is the world’s leader in AI innovation and investment, and has a strong history of working with the EU on economic, security and innovation opportunities, making them natural partners, the paper says.
The think tank also notes that a failure to deepen transatlantic cooperation on AI risks ceding ground to the vision of AI governance promoted by China, which is using AI to monitor and detain ethnic minorities and exporting its approach to other governments with authoritarian mindsets.
Key US agencies have argued for a light-touch approach to regulating AI, in contrast to the EU’s more interventionist approach. But at the end of last month, the US stepped back from going its own way entirely – reversing its decision not to participate in the G7’s Global Partnership on AI.