Rahwan warned that an artificial intelligence should not be created unless it is necessary, because it is difficult to chart the course of its potential evolution, and we may be unable to limit its capabilities further down the line.

We may not even know when superintelligent machines have arrived: determining whether a machine’s intelligence surpasses our own runs into the same fundamental obstacles as the containment problem itself.

At the current pace of AI development, this advice may amount to wishful thinking. Companies from Baker McKenzie to tech giants such as Google, Amazon, and Apple are still integrating AI into their businesses, so it may be only a matter of time before we have a superintelligence on our hands.

Unfortunately, it appears that laws of robotics would be powerless to prevent a potential “machine uprising,” and that AI development is a field best explored with caution.