What if the brakes go out in a driverless car? Does it mow down a crowd of pedestrians or swerve into a concrete wall and sacrifice its passenger?
Researchers at the Massachusetts Institute of Technology are asking humans around the world how they think a robot car should handle life-or-death decisions.
They're finding that many people want self-driving cars to act for the greater good, preserving as much life as possible. But a car programmed to sacrifice its passengers for the greater good is not one they'd want to buy.
The researchers' goal is not just to inspire better algorithms and ethical tenets to guide autonomous vehicles, but to understand what it will take for society to accept these vehicles and use them.