Oxbotica, a start-up founded by Oxford University graduates, is using a technology called ‘deepfaking’. First used to create fake internet videos, it employs deep-learning artificial intelligence (AI) to generate thousands of photo-realistic images in minutes.
Two co-evolving AI systems generate the data: one attempts to create the most convincing fake images it can, while the other tries to tell the real images from the fakes. As the first system learns and improves, the detection system eventually becomes unable to spot the difference between a real and a fake image.
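What the article describes is, in essence, a generative adversarial network (GAN). Oxbotica has not published its models, but the adversarial idea can be sketched with a toy one-dimensional example (all names and numbers here are illustrative, not Oxbotica's): a "generator" with parameters `w`, `b` tries to make its samples look like the real data, while a logistic-regression "discriminator" tries to tell the two apart.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: scalar samples from a normal distribution centred at 4.
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: maps noise z ~ N(0, 1) to w*z + b (learnable w, b).
w, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(a*x + c) estimates P(x is real).
a, c = 0.1, 0.0

lr = 0.05
for step in range(2000):
    n = 64
    real = sample_real(n)
    z = rng.normal(0.0, 1.0, n)
    fake = w * z + b

    # Discriminator step: ascend on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(a * real + c)
    d_fake = sigmoid(a * fake + c)
    a += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend on log D(fake) (non-saturating GAN loss).
    d_fake = sigmoid(a * fake + c)
    g = (1 - d_fake) * a          # gradient of log D(fake) w.r.t. fake
    w += lr * np.mean(g * z)      # chain rule: d fake / dw = z
    b += lr * np.mean(g)          # chain rule: d fake / db = 1

# After training, the generator's offset b has drifted toward the
# real data's mean, without ever seeing a gradient of the real data
# directly -- only the discriminator's opinion.
```

Real deepfake systems replace these two scalar-parameter models with deep convolutional networks operating on images, but the two-player training loop has the same shape: the detector's judgement is the only training signal the image-maker receives.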
The technology allows Oxbotica to “expose its autonomous vehicles to the near-infinite variations of the same situation” without any form of real-world testing. The deepfake algorithms can quickly reproduce a scene – a junction, say – to show the car what it would look like in rain, fog or other adverse conditions.
It’s claimed to be able to reverse road signage and layouts, alter the lighting of a scene and even convincingly replace one object (for example, a set of traffic lights) with another (a tree). The synthetic images produced feed the driverless system with thousands of variations of real-life experiences.
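Oxbotica’s system learns these transformations rather than applying hand-written filters, but the basic idea of “the same scene under many conditions” can be illustrated with a deliberately simple, hypothetical NumPy sketch, in which crude pixel perturbations stand in for the learned deepfake model:

```python
import numpy as np

rng = np.random.default_rng(1)

def add_fog(img, density=0.5):
    """Blend the image toward a flat light-grey 'fog' (pixels are 0..1 floats)."""
    fog = np.full_like(img, 0.9)
    return (1 - density) * img + density * fog

def change_lighting(img, factor=0.6):
    """Scale intensities to simulate dusk (factor < 1) or glare (factor > 1)."""
    return np.clip(img * factor, 0.0, 1.0)

# A stand-in 4x4 greyscale "scene"; a real pipeline would use camera frames.
scene = rng.random((4, 4))

# One captured scene becomes several training variants.
variants = [add_fog(scene, d) for d in (0.2, 0.5, 0.8)]
variants += [change_lighting(scene, f) for f in (0.4, 1.4)]
```

The point of the deepfake approach is that a generative model produces photo-realistic versions of these variations – and structural edits such as swapping objects or mirroring road layouts – which simple arithmetic like the above cannot.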
Currently, autonomous car testing relies on a massive amount of real-world testing, with major industry figures citing millions of miles covered in multiple countries. However, Oxbotica co-founder Paul Newman said the new system is “the equivalent of giving someone a fishing rod rather than a fish”.
“There is no substitute for real-world testing”, Newman said, “but the autonomous vehicle industry has become concerned with the number of miles travelled as a synonym for safety… You cannot guarantee the vehicle will confront every eventuality. You’re relying on chance encounter. The use of deepfakes enables us to test countless scenarios, which will not only enable us to scale our real-world testing exponentially, it’ll also be safer.”