Interview with Jack Stilgoe, associate professor at University College London and fellow of the Alan Turing Institute, about autonomous cars.
“I think the uptake of self-driving cars at scale will require substantial changes to behaviours, infrastructures and the rules of the road, as well as improvements in the technology inside the cars.”
Do you think it will be a long, gradual process to get to the point where autonomous vehicles (AVs) take over the streets, or is it going to happen quite quickly? When do you think it will be an everyday thing to own an AV?
Jack Stilgoe: I think the uptake of self-driving cars at scale will require substantial changes to behaviours, infrastructures and the rules of the road, as well as improvements in the technology inside the cars. This means that the transition will be much slower than the enthusiasts predict. We should remember that companies, particularly start-ups, have little choice but to pretend this will happen quickly because their strategies are short term.
Whether this ends up with a model of private ownership, or one of ride-hailing, or a mix, depends on all sorts of factors. It’s too early to tell.
What kind of technologies (hardware and software) does a perfectly safe AV need and how much of it is available today?
Jack Stilgoe: There will never be such a thing as a perfectly safe technology. We do not even know at the moment how safe is safe enough. Would people accept something that was just a bit safer than human drivers, or would they demand standards like on planes or trains, which are orders of magnitude safer than driving? At the moment, these cars are still learning to drive, and they are creating risks in public. In the US, the companies are deciding for themselves what is acceptable, which jeopardises public trust. If policymakers are really interested in safety, they should be trying to take control of these experiments.
How is the data required for deep learning gathered? On the roads, virtually or both?
Jack Stilgoe: Most companies are using a mix of real-world and simulated learning. But we should not pretend that a car can be considered safe just because it has travelled x thousand miles either on the road or in a simulator. It’s the edge cases that matter – the moments when something new happens. What should a car do in those circumstances?
There are also many ethical issues surrounding AVs. Will they be able to perfectly differentiate between traffic participants, and is prioritizing some lives over others an ethical thing to do?
Jack Stilgoe: I think this sort of trolley problem thinking is a distraction. If you look at the Uber fatality in March 2018, the real ethical choice there was a decision to privilege false positives over false negatives. Self-driving cars will not be omniscient and they will not sort out difficult political decisions on our behalf. I don’t think that we should pretend we can entrust morality to machines. I think we need to think about the responsibilities of designers.
The real ethical questions around self-driving cars are things like: how safe do they need to be? Who will benefit from them and who will lose out? And then there is the political question: who should decide?
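To make the false-positive/false-negative trade-off Stilgoe mentions more concrete, here is a minimal, purely illustrative sketch in Python. It is not Uber’s or any manufacturer’s actual perception software, and all objects and confidence numbers are invented; it only shows how the confidence threshold at which a car decides to brake trades phantom braking (false positives) against missed hazards (false negatives), and that someone has to choose where to set it.

```python
# Illustrative sketch only: invented detections and confidence scores,
# not any real perception stack. It shows how a braking threshold trades
# false positives (phantom brakes) against false negatives (missed hazards).

detections = [
    {"object": "plastic bag", "confidence": 0.55, "is_real_hazard": False},
    {"object": "shadow",      "confidence": 0.40, "is_real_hazard": False},
    {"object": "pedestrian",  "confidence": 0.62, "is_real_hazard": True},
]

def evaluate(threshold):
    """Count both error types for a given braking threshold."""
    false_positives = sum(
        1 for d in detections
        if d["confidence"] >= threshold and not d["is_real_hazard"]
    )
    false_negatives = sum(
        1 for d in detections
        if d["confidence"] < threshold and d["is_real_hazard"]
    )
    return false_positives, false_negatives

for threshold in (0.3, 0.6, 0.7):
    fp, fn = evaluate(threshold)
    print(f"threshold={threshold:.1f}  phantom brakes={fp}  missed hazards={fn}")

# A low threshold brakes for bags and shadows; a high threshold rides
# smoothly but can miss the pedestrian. Which error to privilege is a
# design decision made by people, not a moral judgement made by the car.
```

In this toy example, lowering the threshold to 0.3 produces two phantom brakes and no missed hazards, while raising it to 0.7 eliminates phantom braking but misses the pedestrian, which is the kind of designer responsibility the interview points to.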