In 1998, an engineer in Sony’s computer science lab in Japan filmed a lost-looking robot moving tentatively around an enclosure. The robot was tasked with two objectives: avoid obstacles and find objects in the pen. It could do so because it was able to learn the contours of the enclosure and the locations of the sought-after objects.
But whenever the robot encountered an obstacle it didn’t expect, something interesting happened: Its cognitive processes momentarily became chaotic. The robot was grappling with new, unexpected data that didn’t match its predictions about the enclosure. The researchers who set up the experiment argued that the robot’s “self-consciousness” arose in this moment of incoherence. Rather than carrying on as usual, it had to turn its attention inward, so to speak, to decide how to deal with the conflict.
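The behavior described above can be sketched in a few lines of code. This is an illustrative toy, not Tani’s actual model: an agent runs on “autopilot” while its predictions match what it observes, and switches into a deliberate mode when the prediction error spikes, loosely mirroring the moment of incoherence the researchers observed. The threshold value is a hypothetical parameter.

```python
# Toy sketch (not the Sony robot's real architecture): switch control
# modes when observations no longer match predictions.

ERROR_THRESHOLD = 0.5  # hypothetical tolerance for prediction mismatch

def step(predicted: float, observed: float) -> str:
    """Return the control mode for one time step."""
    error = abs(predicted - observed)
    if error > ERROR_THRESHOLD:
        return "deliberate"   # unexpected obstacle: attend to the conflict
    return "autopilot"        # predictions hold: carry on as usual

# A wall appears where open space was predicted:
mode = step(predicted=0.0, observed=1.0)   # -> "deliberate"
```

In this framing, the “self-consciousness” moment is simply the branch where the agent stops acting on its model and starts revising it.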
This idea about self-consciousness — that it asserts itself in specific contexts, such as when we are confronted with information that forces us to reassess our environment and then make an executive decision about what to do next — is an old one, dating back to the work of the German philosopher Martin Heidegger in the early 20th century. Now, A.I. researchers are increasingly influenced by neuroscience and are investigating whether neural networks can and should achieve the same higher levels of cognition that occur in the human brain.
Far from the “stupid” robots of today, which don’t have any real understanding of where they are or what they experience, the hope is that a level of awareness analogous to consciousness in humans could make future A.I.s much more intelligent. They could learn by themselves, for example, how to select and focus on data in order to acquire new skills that they assimilate and go on to perform with ease. But giving machines the power to think like this also brings with it risks — and ethical uncertainties.
“I don’t design consciousness,” says Jun Tani, PhD, co-designer of the 1998 experiment and now a professor in the Cognitive Neurorobotics Research Unit at the Okinawa Institute of Technology. He tells OneZero that to describe what his robots experience as “consciousness” is to use a metaphor. That is, the bots aren’t actually cogitating in a way we would recognize; they’re just exhibiting behavior that is structurally similar. And yet he is fascinated by parallels between machine minds and human minds — so much so that he has tried simulating the neural responses associated with autism via a robot.
One of the world’s foremost A.I. experts, Yoshua Bengio, founder of Mila, the Quebec Artificial Intelligence Institute, is likewise fascinated by consciousness in A.I. He uses the analogy of driving to describe the switch between conscious and unconscious actions.
“It starts by conscious control when you learn how to drive and then, after some practice, most of the work is done at an unconscious level and you can have a conversation while driving,” he explains via email.
That higher, attentive level of processing is not always necessary — or even desirable — but it seems to be crucial for humans to learn new skills or adapt to unexpected challenges. A.I. systems and robots could potentially avoid the stupidity that currently plagues them if they gained the same ability to prioritize, focus, and resolve a problem.
Inspired in part by what we think we know about human consciousness, Bengio and his colleagues have spent several years working on the principle of “attention mechanisms” for A.I. systems. These systems are able to learn what data is relevant and therefore what to focus on in order to complete a given task.
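At its core, an attention mechanism weights inputs by their relevance to the task at hand. The sketch below is a minimal scaled dot-product attention step in plain Python — a toy version of the general technique Bengio describes, not his code. Given a query and a set of key/value pairs, the mechanism emphasizes the values whose keys best match the query; in a trained system, those matches are learned.

```python
import math

def attention(query, keys, values):
    """One scaled dot-product attention step over toy vectors."""
    # Similarity of the query to each key, scaled by dimension.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # Softmax turns scores into focus weights that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Output is the weighted mix of values: relevant data dominates.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

out = attention(query=[1.0, 0.0],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0], [0.0]])
# The first key matches the query, so the output leans toward 10.0.
```

The point of the example is the weighting step: the system doesn’t process all data equally, it learns where to look.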
“Research on consciousness,” Bengio adds, “is still considered somewhat taboo in A.I.” Because consciousness is such a difficult phenomenon to understand, even for neuroscientists, it has mostly been discussed by philosophers until now, he says.
Knowledge about the human brain and the human experience of consciousness is increasingly relevant to the pursuit of more advanced systems and has already led to some fascinating crossovers. Take, for example, the work by Newton Howard, PhD, professor of computational neurosciences and neurosurgery at the University of Oxford. He and colleagues have designed an operating system inspired by the human brain.
Rather than rely on one approach to solving problems, it can choose the best data processing technique for the task in question — a bit like how different parts of the brain handle different sorts of information.
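One simple way to picture this “choose the best technique per task” idea is a dispatcher that routes each input to a specialized handler, a bit like different brain regions handling different kinds of information. The handler names below are illustrative inventions, not part of Howard’s actual system.

```python
# Hypothetical sketch: route data to a handler suited to its kind.

def handle_text(data):
    return f"parsed text: {data}"

def handle_numbers(data):
    return f"computed stats: mean={sum(data) / len(data)}"

def route(data):
    """Pick a processing technique based on the kind of data."""
    if isinstance(data, str):
        return handle_text(data)
    if all(isinstance(x, (int, float)) for x in data):
        return handle_numbers(data)
    raise ValueError("no handler for this kind of data")

route("hello world")   # goes to the text handler
route([1, 2, 3])       # goes to the numeric handler
```

A real brain-inspired operating system would of course learn and refine this routing rather than hard-code it, but the division of labor is the same in spirit.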
He’s also experimenting with a system that can gather data from various sensors and sources in order to automatically build knowledge on various topics. “When it’s deployed, it’s like a child,” he says. “It’s eager to learn.”
All of this work, loosely inspired by what we know about human brains, may push the boundaries of what A.I. can accomplish today. And yet some argue it might not get us much closer to a truly conscious machine mind that has a sense of a self, a detached “soul” that inhabits its body (or chipset), with free will to boot.
The philosopher Daniel Dennett, who has spent much of his life thinking about what consciousness is and is not, argues that we won’t see machines develop this level of consciousness anytime soon — not even within 50 years. He and others have pointed out that the A.I.s we are able to build today seem to have no semblance of the reflective thinking or awareness that we assume are crucial for consciousness.
It’s in the search for a system that does possess these attributes, though, that a profound crossover between neuroscience and A.I. research might happen. At the moment, consciousness remains one of the great mysteries of science. No one knows to what activity in the brain it is tied, exactly, though scientists are gradually working out that certain neural connections seem to be associated with it. Some researchers have found oscillations in brain activity that appear to be related to specific states of consciousness — signatures, if you like, of wakefulness.
By replicating such activity in a machine, we could perhaps enable it to experience conscious thought, suggests Camilo Miguel Signorelli, a research assistant in computer science at the University of Oxford.
He mentions the liquid “wetware” brain of the robot in Ex Machina, a gel-based container of neural activity. “I had to get away from circuitry, I needed something that could arrange and rearrange on a molecular level,” explains Oscar Isaac’s character, who has created a conscious cyborg.
“That would be an ideal system for an experiment,” says Signorelli, since a fluid, highly plastic brain might be configured to experience consciousness-forming neural oscillations — akin to the waves of activity we see in human brains.
This, it must be said, is highly speculative. And yet it raises the question of whether completely different hardware might be necessary for consciousness (as we experience it) to arise in a machine. Even if we do one day successfully confirm the presence of consciousness in a computer, Signorelli says that we will probably have no real power over it.
“Probably we will get another animal, humanlike consciousness but we can’t control this consciousness,” he says.
As some have argued, that could make such an A.I. dangerous and unpredictable. But a conscious machine that proves to be harmless could still raise ethical quandaries. What if it felt pain, despair, or a terrible state of confusion?
“The risk of mistakenly creating suffering in a conscious machine is something that we need to avoid,” says Andrea Luppi, a PhD student at the University of Cambridge who studies human brain activity and consciousness.
It may be a long time before we really need to grapple with this sort of issue. But A.I. research is increasingly drawing on neuroscience and ideas about consciousness in the pursuit of more powerful systems. That’s happening now. What sort of agent this will help us create in the future is, like the emergence of consciousness itself, tantalizingly difficult to predict.
Original post: https://onezero.medium.com/how-to-give-a-i-a-pinch-of-consciousness-c70707d62b88