I guess there’s always a third option, the one Elon Musk worries about. That is, of course, Skynet.
Or more viscerally, Terminator.
But AI is already here, even if it’s not self-aware. And we’re adding more of it every day to our homes, our workplaces, and our cities. The real question is: what will AI do for us? Will it replace us, as we’ve seen automation do in factories? AI radiologists, for example, are already 11.5% better than human radiologists.
Or will it enhance our capabilities?
For instance, an AI radiologist is unlikely to be able to converse with a patient, question the source of the data it is looking at, seek tangential possibilities, or interface with doctors. But a radiologist using AI could be smarter and faster, freeing up more time for both deeper human connection and additional work.
Slater Victoroff, AI developer and CTO of process automation startup Indico, thinks AI will be our friend. And co-worker.
And, essentially, bionic arm: you’re in charge, but AI is a force multiplier.
Not, in other words, our replacement.
“One mode is what I would call the Android mode of thinking, which is [that] an AI worker is going to come in and sit next to me and it does some portion of work of a human worker or something along those lines, right,” Victoroff told me on a recent episode of the TechFirst podcast. “And I think that stands in pretty stark opposition to what I personally prefer, which is the bionic arm notion of AI. It’s the idea that this is fundamentally a tool like any other, and you can lift 10 times as much, right, you can do your work 10 times better. But it’s still fundamentally you doing the work.”
That’s a comforting thought.
Listen to our conversation on the TechFirst podcast:
And there are clear examples where that happens. A trivial one: I can do more, faster, when I hand off simple math problems to Siri on my iPhone. A more compelling workplace example is a lawyer who uses AI to serve more people by offloading repetitive, well-defined tasks onto a smart system.
And it’s generally clear in those cases that while there’s a smart system involved, it’s not primarily responsible for the outcomes.
“When we’re talking about an AI lawyer, it’s important that we’re not actually talking about an AI lawyer, right?” Victoroff says. “What we’re talking about is a series of very, very powerful tools that allow one lawyer to serve a hundred people in a really cursory way, as opposed to three people in a really detailed way.”
“Nobody’s going to say that [they] want an AI lawyer representing [them] in an actual court case … correct?”
Essentially what he’s saying is that there’s no magic. No genie in the bottle. And no super-smart Jarvis to take orders and almost magically fulfill them, as we see in the Iron Man movies.
Interestingly, Victoroff talks about the black box problem.
Not for AI, which is what you might expect, but for people.
The AI black box problem, of course, is the idea that we don’t know why an AI made a decision; we just know that it made one. This is dangerous: just days ago, a Facebook AI recommendation engine labeled videos of Black men as “primates.” Google had a similar problem in 2015. We clearly need better processes to vet the results of AI systems, ensure accuracy, and reduce bias.
But there’s a human black box problem too.
“We don’t talk as much about black boxes in human processes … which is actually kind of what these things are, right?” Victoroff says. “They’re highly non-transparent.”
One example: loan applications.
A people-driven process might look like a well-oiled machine from the outside, but in reality it’s a piece of paper hitting Bob’s or Betty’s desk, a decision being handed down, and off it goes. Both human and AI black boxes exist, and that’s a challenge in an environment where we want to build a fairer and better world.
For Victoroff, there’s no magic.
There’s just repetitive learning and decisions. AI, he says, is fundamentally a mirror. Which means that if we have bias in our culture, we’ll inevitably have bias in our AI. And that’s not a comforting thought.
What is comforting, however, is the possibility of a future in which we are not replaced by smarter machines, as Elon Musk has publicly mused, but in which ever-smarter computer systems help us do the things we need to get done.
“If the job is mimicking a human, it’s never going to be as good as a human, right?” says Victoroff. “I think that the arc of history on this one is very, very clear … generally what you see with automation is an increase in the amount of work that you’re able to do.”