Interview with Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University, about autonomous cars.
Who should make the moral decisions of autonomous vehicles (AVs): programmers, companies or governments? Is there a consensus on this within the scientific community?
Patrick Lin: Expertise also matters. For instance, if you were constructing a house, you would want input from professional architects and engineers. Why would that be different with ethical issues? Admittedly, there’s still disagreement among philosophers on what the “right” ethical theory is, but we don’t need that answer for practical ethics – we can borrow methods from different ethical theories to draw out the various considerations. Ultimately, how an autonomous vehicle should be programmed is a political decision that must include buy-in from the general public. This isn’t to say the public is always right – they’re clearly not. But they are a critical stakeholder, and if they’re not on board or consulted, then industry will continue to suffer setbacks.
While automated vehicles have been manufactured and tested for a while now, does the technology even exist to differentiate between people, animals, and other traffic participants?
Patrick Lin: Computer vision, including the algorithms that process images, still needs improvement. These systems still have a hard time picking up small objects, like animals, as well as telling shadows apart from dark spaces, like a pothole. They don’t work well at all in rain and other bad weather, which occur in most parts of the world. That’s why it’s a good idea to have multiple sensing systems, from digital cameras to radar to lidar and other emerging technologies.
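The redundancy Lin describes can be sketched in code. The following is a toy illustration, not any real driving stack: the sensor names, confidence values, and voting rule are all invented for the example.

```python
# Toy sketch of sensor redundancy: combine detections from several
# independent sensors so that one failing (e.g. a camera in heavy rain)
# does not blind the vehicle. All names and thresholds are illustrative.

def fuse_detections(readings):
    """Merge per-sensor detections into one list, keeping an object
    if at least two sensors agree or one sensor is very confident."""
    votes = {}
    for sensor, detections in readings.items():
        for label, confidence in detections:
            votes.setdefault(label, []).append((sensor, confidence))
    fused = []
    for label, hits in votes.items():
        best = max(conf for _, conf in hits)
        if len(hits) >= 2 or best >= 0.9:
            fused.append((label, best))
    return sorted(fused)

readings = {
    "camera": [("pedestrian", 0.95), ("pothole", 0.40)],
    "lidar":  [("pedestrian", 0.80), ("pothole", 0.75)],
    "radar":  [("pedestrian", 0.70)],
}
print(fuse_detections(readings))
# The pothole survives because camera and lidar both saw it,
# even though neither was confident alone.
```

Real fusion systems weigh sensors by the conditions they handle well (radar in rain, lidar in darkness), which is exactly why carrying several modalities helps.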
Still, they’re surprisingly good and can be made better. They can distinguish a person from a bicyclist from an animal from a car, for instance. And we already know that our laptops and apps have facial recognition technology that can identify specific people; so in theory, robot cars could do that too, though it’s unproven at highway speeds with the cameras these cars have now.
If it’s important for robot cars to identify the various things on the road, we could also create a vehicle-to-vehicle (V2V) communications system, or vehicle-to-infrastructure (V2I), or both (V2X). This could include having tags or transmitters on cars, as well as on motorcycle helmets, on smartphones if you’re a pedestrian, and so on. But that’s much more work than making more advanced sensors, and it would open up more ways for abuse or system failures.
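To make the V2X idea concrete, here is a minimal sketch of a position beacon that a tagged car, helmet, or phone might broadcast. The JSON encoding and field names are assumptions for illustration; real V2X stacks (DSRC, C-V2X) use standardized binary formats such as the SAE J2735 Basic Safety Message. The validation step hints at the abuse problem Lin raises: an open channel must not trust unverified input.

```python
# Toy sketch of a V2X-style beacon. Field names and the JSON encoding
# are illustrative assumptions, not a real V2X message format.
import json

def make_beacon(sender_id, kind, lat, lon, speed_mps):
    """Build a position beacon a car, helmet, or phone could broadcast."""
    return json.dumps({
        "id": sender_id,      # tag/transmitter identity
        "kind": kind,         # "car", "motorcycle", "pedestrian", ...
        "lat": lat,
        "lon": lon,
        "speed_mps": speed_mps,
    })

def parse_beacon(raw):
    """Decode a received beacon, rejecting malformed messages rather
    than trusting them: an open broadcast channel invites spoofing."""
    msg = json.loads(raw)
    required = {"id", "kind", "lat", "lon", "speed_mps"}
    if not required.issubset(msg):
        raise ValueError("incomplete beacon")
    return msg

beacon = make_beacon("helmet-42", "motorcycle", 35.30, -120.66, 12.5)
print(parse_beacon(beacon)["kind"])  # prints "motorcycle"
```

Even this toy shows why Lin calls V2X "more work": every participant needs a transmitter, a shared message schema, and some defense against forged or garbled beacons.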
Is policy making possible when the effects of AVs are so unpredictable?
Patrick Lin: Absolutely. We might not be able to foresee all the possible effects, or even effects beyond the near- or mid-term, but that doesn’t mean all paths forward are equal. We just have to reason as best as we can under these conditions of uncertainty, and more perspectives tend to be better than fewer – there’s wisdom in the crowd. Anyway, if we don’t proactively make policy, that is itself a policy: we’re letting “the market” control our fates. But the forces that drive the market – such as efficiency, pricing, branding, and so on – are not necessarily the same forces that promote social responsibility or “a good life.”
For example, if we’re not careful, we could see massive labor disruptions by a robotic workforce, and this would be dangerous if there’s no plan to retrain displaced human workers or otherwise take care of their daily needs. Truck drivers would seem to be among the first casualties, and they represent one of the most popular jobs in the world, as a vital link in our transportation infrastructure. Taxi and other hired drivers may be next, along with traditional auto mechanics who aren’t also computer engineers.
Back to ethical programming, there’s also lots of uncertainty when it comes to a potential crash. But, one way or another, a decision about that programming will be made – either by the programmer and company, or by a more inclusive group that accounts for what the public wants.