The Covid-19 outbreak has changed the world in a myriad of unforeseen ways, and experts in every field are scrambling to predict its impact across our health care and financial systems, as well as on our lives more generally.
Among those experts are AI researchers and data scientists. After all, these sorts of large-scale, big-data problems are where artificial intelligence should shine. But AI scientists are struggling with exactly the same unknowns that medical doctors are — the “novel” part of the novel coronavirus.
Back to the future? Not always
AI and machine learning (ML) are inherently backward-looking. To get them working as needed, data scientists must train them on huge amounts of historical data. The problem is that with large, world-changing events like Covid-19, our reality never matches the data used to train our algorithms.
Everything we’re experiencing is unprecedented, unanticipated and unpredictable, and the future is looking hazy at best. We’ve all watched various economists disagree over what “shape” the country’s economic recovery will take. That’s because what we’re experiencing is so out of the ordinary that their models no longer apply.
When a trending topic means impending disaster
But just because AI and ML can’t yet model the near- or long-term future across every domain doesn’t mean we should set them aside. AI and ML are at their best when they’re drawing on existing data to make future predictions. And by shortening those prediction horizons, we can end up with a pretty solid early warning system.
For example, because ML is so good at identifying statistical changes in patterns of information, we could train models to monitor global newsfeeds for trending terms that might signal a breaking event, such as a sudden rise in regional respiratory illness cases or an impending natural disaster.
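The idea of flagging a statistical change in a pattern can be sketched very simply. Below is a minimal, hypothetical example — not any particular monitoring system — that compares today’s mention count for a term against its trailing baseline using a z-score; the term, counts, and threshold are all made up for illustration:

```python
# Hypothetical sketch of trending-term detection — a minimal example,
# not a production monitoring system. All data here is invented.
from statistics import mean, stdev

def is_spiking(history, today, z_threshold=3.0):
    """Flag a term whose count today sits far above its historical baseline.

    history: daily mention counts for the term over a trailing window
    today:   today's mention count
    """
    baseline = mean(history)
    spread = stdev(history)
    if spread == 0:                      # flat history: any rise is notable
        return today > baseline
    z = (today - baseline) / spread      # how many standard deviations above normal?
    return z >= z_threshold

# Toy data: daily mentions of "respiratory illness" in a regional newsfeed
history = [12, 9, 14, 11, 10, 13, 12]
print(is_spiking(history, 15))   # False: a modest uptick, within normal variation
print(is_spiking(history, 60))   # True: a sudden surge worth escalating
```

A real system would, of course, need to handle seasonality, bursty news cycles, and baseline drift, but the core signal is the same: a count far outside its recent distribution.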
AI and humans: Better together
However, while AI can identify spikes in mentions or draw trends based on social posts or news items, its suggestions can’t be taken as gospel. That’s because its modeling exists in a vacuum — there’s no context, causality or qualitative analysis.
To get the most out of these systems, we need to pair them with a human quality assurance team that can decide whether reported data represents a blip, coincidence, misinformation or something that needs to be acted on.
With a human team working in tandem with AI, you get a more solid event detection system. The worst that can happen is that you get a false positive that can then be dismissed or flagged for additional monitoring, which is by far preferable to being late to the party and having to respond reactively.
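The review loop described above can be sketched as a simple triage queue: the model proposes detections, and nothing is acted on until a human assigns it a status. This is a hypothetical illustration — the statuses, the reviewer function, and the toy data are all assumptions, not a reference design:

```python
# Hypothetical sketch of human-in-the-loop triage for model detections.
# Statuses and the reviewer step are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    term: str
    region: str
    status: str = "pending"   # pending -> confirmed / dismissed / monitoring

def triage(detections, reviewer):
    """Pass each model detection through a human reviewer before acting on it."""
    for d in detections:
        d.status = reviewer(d)   # human decides: blip, coincidence, misinfo, or real?
    return [d for d in detections if d.status == "confirmed"]

# Toy reviewer: confirms only the detections a human has judged credible
credible = {("respiratory illness", "region-A")}
reviewer = lambda d: "confirmed" if (d.term, d.region) in credible else "monitoring"

queue = [Detection("respiratory illness", "region-A"),
         Detection("respiratory illness", "region-B")]
actionable = triage(queue, reviewer)
print([d.region for d in actionable])   # only the human-confirmed event remains
```

The design choice worth noting is that a false positive never triggers an action directly — it simply lands in the queue as “monitoring,” which matches the article’s point that an early warning you can dismiss beats a late reaction.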
Let’s hear it for the ‘now’
Just because AI in its current state can’t accurately or feasibly tell us what the long-term impact of Covid-19 on the world will be doesn’t mean we should discount its value. Instead of trying to use AI as a crystal ball, we’re better off applying our models to scouring real-time data for informational spikes we can act on. This might mean sending out social distancing or travel alerts in areas where we’re seeing a spike in mentions of related symptoms, or even directing health care resources to potentially affected areas.
By refashioning our AI and ML models to look at the “now,” humans can use these algorithms to make smart, timely decisions that position us to handle whatever the world throws at us in the months to come.