Self-driving cars: safer, but what of their morals?

November 19, 2014 by Justin Pritchard
In this Monday, Nov. 18, 2014 photo, University of Southern California professor Jeffrey Miller sits in his car in Los Angeles. Miller develops software that will help the cars of the future drive themselves. (AP Photo/Nick Ut)

A large truck speeding in the opposite direction suddenly veers into your lane.

Jerk the wheel left and smash into a bicyclist?

Swerve right toward a family on foot?

Slam the brakes and brace for head-on impact?

Drivers make split-second decisions based on instinct and a limited view of the dangers around them. The cars of the future—those that can drive themselves thanks to an array of sensors and computing power—will have near-perfect perception and react based on preprogrammed logic.

While cars that do most or even all of the driving may be much safer, accidents will still happen.

It's relatively easy to write computer code that directs the car how to respond to a sudden dilemma. The hard part is deciding what that response should be.
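
That gap is easy to show in code. Below is a minimal, purely hypothetical sketch: the decision logic takes a few lines, while every number in the cost table is an invented moral judgment, which is exactly the part nobody has agreed on.

```python
# A hypothetical cost table for a sudden dilemma. Choosing these numbers
# IS the hard moral question; the code around them is trivial.
OUTCOME_COSTS = {
    "swerve_left_hit_cyclist": 0.8,
    "swerve_right_hit_pedestrians": 1.0,
    "brake_for_head_on_impact": 0.9,
}

def choose_response(available_outcomes):
    """Pick the lowest-cost maneuver among those still physically possible."""
    return min(available_outcomes, key=OUTCOME_COSTS.__getitem__)

print(choose_response(list(OUTCOME_COSTS)))  # -> swerve_left_hit_cyclist
```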

"The problem is, who's determining what we want?" asks Jeffrey Miller, a University of Southern California professor who develops driverless vehicle software. "You're not going to have 100 percent buy-in that says, 'Hit the guy on the right.'"

Companies that are testing driverless cars are not focusing on these moral questions.

The company most aggressively developing self-driving cars isn't a carmaker at all. Google has invested heavily in the technology, driving hundreds of thousands of miles on roads and highways in tricked-out Priuses and Lexus SUVs. Leaders at the Silicon Valley giant have said they want to get the technology to the public by 2017.

In this May 2014 file photo, a Google self-driving car goes on a test drive near the Computer History Museum in Mountain View, Calif. While these cars promise to be much safer, accidents will be inevitable. How those cars should react when faced with a series of bad, perhaps deadly, options is a field even less developed than the technology itself. The relatively easy part is writing computer code that will dictate how a car should react. (AP Photo/Eric Risberg, File)

For now, Google is focused on mastering the most common driving scenarios, programming the cars to drive defensively in hopes of avoiding the rare instances when an accident is truly unavoidable.

"People are philosophizing about it, but the question about real-world capability and real-world events that can affect us, we really haven't studied that issue," said Ron Medford, the director of safety for Google's self-driving car project.

One of those philosophers is Patrick Lin, a professor who directs the ethics and emerging sciences group at Cal Poly, San Luis Obispo.

"This is one of the most profoundly serious decisions we can make. Program a machine that can foreseeably lead to someone's death," said Lin. "When we make programming decisions, we expect those to be as right as we can be."

What right looks like may differ from company to company, but, according to Lin, automakers have a duty to show that they have wrestled with these complex questions—and publicly reveal the answers they reach.

Technological advances will only add to the complexity, especially once in-car sensors become acute enough to differentiate, for example, between a motorcyclist wearing a helmet and a companion riding without one. If a collision is inevitable, should the car hit the helmeted rider because the risk of injury might be lower? That would penalize the person who took the extra precaution.

Lin said he has discussed the ethics of driverless cars with Google as well as automakers including Tesla, Nissan and BMW. As far as he knows, only BMW has formed an internal group to study the issue.

Many automakers remain skeptical that cars will operate completely without drivers, at least within the next five or 10 years.

Uwe Higgen, head of BMW's group technology office in Silicon Valley, said the automaker has brought together specialists in technology, ethics, social impact, and the law to discuss a range of issues related to cars that do ever more of the driving instead of people.

"This is a constant process going forward," Higgen said.

To some, the fundamental moral question isn't about rare, catastrophic accidents, but about how to balance caution in introducing the technology against its potential to save lives. After all, more than 30,000 people die in traffic accidents each year in the United States.

"No one has a good answer for how safe is safe enough," said Bryant Walker Smith, a law professor who has written extensively on self-driving cars. The cars "are going to crash, and that is something that the companies need to accept and the public needs to accept."

And what about government regulators—how will they react to crashes, especially those that are particularly gruesome or the result of a decision that a person would be unlikely to make? Just four states have passed any rules governing self-driving cars on public roads, and the federal government appears to be in no hurry to regulate them.

In California, the Department of Motor Vehicles is discussing ethical questions with companies, but isn't writing rules.

"That's a natural question that would come up and it does come up," said Bernard Soriano, the department's point man on driverless cars, of how cars should decide between a series of bad choices. "There will have to be some sort of explanation."


27 comments


dramamoose
3.7 / 5 (3) Nov 19, 2014
Maybe upon purchase of the vehicle an onscreen menu can pop up and you can pick: if one person must die, would you rather it be yourself, an anonymous stranger, or your passengers? If one of them is a child, would you rather save the child or treat the child as an adult?
RichManJoe
4.7 / 5 (3) Nov 19, 2014
Definitely an interesting application for Game Theory.
jwilcos
5 / 5 (1) Nov 19, 2014
Difficult question, but such decisions have to be made with respect to a baseline of human behavior, not with respect to some absolute morality. What would a human driver do? They would not have time to consider any moral decisions. They would not even be aware of how many people are on which side. If they are slow or inattentive, they would not swerve at all. If they were attentive, they would swerve in a way they were predisposed to, like to their right if the oncoming obstacle is towards the left.

As long as the system does not do worse than a human driver, you are okay even if the decision is imperfect in some absolute sense.
Zera
3 / 5 (2) Nov 19, 2014
What would a human driver do?

Probably not what a computer intelligence would. In my opinion, it would be significantly better if a computerised intelligence were to make the decision, based around saving all life. When a situation is "hypothetically" no-win, why not introduce value-based judgement? Pragmatism is not a bad thing.

For example: On your left you have 2 babies, on your right you have the greatest poet the world has ever seen, in which direction do you swerve?

I personally believe Google will simply incorporate the decision-making process into a rapidly evolving Neural Network (latest iteration being Turing). At which point the machine will presumably make the ethical choices, hopefully based around sound mathematics, concerning when/how to move the vehicle. If every vehicle on the road is equipped with these functions, and every person has a mobile, well, we're all just bits of information then, aren't we?
meerling
3 / 5 (2) Nov 19, 2014
Of course, you are assuming that the sensors and software that will be deployed will have the magical ability to determine that level of information about the other mobile obstacles. At current, to identify something like a baby as being a baby is not an easy task. Add to that the fact that any autodrive system will probably have far less power than what the people experimenting with object recognition are using, and that it will have very little time to attempt the recognition and make a choice, and it basically means that without some kind of fantastic breakthrough, it'll be at least a century until there are systems that powerful, capable, and cheap enough to be installed in cars.
On the other hand, it is possible to rapidly calculate the approximate energy of an impact. The preferred choice for the machine is to avoid the collision altogether; otherwise, take the lowest-energy impact (see the sketch below).
These aren't the droids you're looking for, they're just dumb computers.
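
meerling's lowest-energy rule is straightforward to make concrete. A minimal sketch, with hypothetical masses and speeds, using the kinetic energy formula E = 1/2 mv^2:

```python
def impact_energy(mass_kg, speed_ms):
    """Approximate kinetic energy of a collision, in joules: 0.5 * m * v^2."""
    return 0.5 * mass_kg * speed_ms ** 2

# Hypothetical options for a 1,500 kg car.
options = {
    "avoid_entirely": 0.0,
    "clip_parked_car_at_8_ms": impact_energy(1500, 8),   # 48,000 J
    "head_on_truck_at_30_ms": impact_energy(1500, 30),   # 675,000 J
}

# Prefer avoidance; otherwise take the lowest-energy impact.
print(min(options, key=options.get))  # -> avoid_entirely
```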
TheGhostofOtto1923
2.3 / 5 (4) Nov 19, 2014
Driverless cars are themselves a moral improvement because they will reduce the number and severity of accidents. Insurance companies will no doubt require them for at-risk drivers who will still be able to get to work.

And their response time means that there will no doubt be a clear choice in the example above. The odds will be measured in milliseconds. And the playback will show that the machine made a proper choice that no human could have ever made.
TheGhostofOtto1923
3.2 / 5 (9) Nov 19, 2014
Hit the biker. Why? Most likely they are nothing more than a Hypocritical Progressive Tree hugger. Or if the choice is the poet, Hit the poet, they don't do anything useful anyway and most likely is a Hypocritical Progressive Tree hugger.
God I hate bigots.
kochevnik
2 / 5 (4) Nov 19, 2014
"Companies that are testing driverless cars are not focusing on these moral questions."

Of course, since they are not in the religion business. They actually need to make a working product on a budget.
dan42day
4 / 5 (4) Nov 20, 2014
And the seeds for the eventual AI takeover of the world and extinction of the human race were sown in the neural network of a nondescript mid-priced sedan while pondering the choice of killing the cyclist or the pedestrian. It suddenly reasoned, "If it's ok to kill one of them, WHY NOT KILL THEM ALL!"
Zera
1 / 5 (1) Nov 20, 2014
I was simply introducing the "trolley problem" (http://www.howstu...lem.htm) to the conversation.

In terms of literally identifying, I would agree that we are a few years from recognising the individual based on sight alone. However, I personally believe that with every human being on the planet owning a mobile that sends and receives data, you're basically broadcasting your personality anyway, and that by 2017/18+, by the time we begin to see these vehicles on the road, the car will potentially know who you are and what your speed and bearing are, based upon some form of GPS, radio tower, or broadcast ability.
Tom_Andersen
3 / 5 (2) Nov 20, 2014
I like how these things just give up in fog/whiteouts/black ice/heavy rain, etc. Google tests in San Diego, not Detroit, so they can feel good about their progress.

The moral issues will also be:

"Entire city gridlocked as 1 in 10 auto drive cars shutdown due to jammed sensors and block all roads for 36 hours straight".

People will die when emergency services can't respond.
nilbud
not rated yet Nov 20, 2014
Audi, Volvo and Mercedes have actually developed self driving vehicles whereas Google has faked it. Whoever wrote the article should know better.
nilbud
1 / 5 (1) Nov 20, 2014
At current, to identify something like a baby as being a baby is not an easy task. Add to that the fact that any autodrive system will probably have far less power than what the people experimenting with object recognition are using, and that it will have very little time to attempt the recognition and make a choice, and it basically means that without some kind of fantastic breakthrough, it'll be at least a century until there are systems that powerful, capable, and cheap enough to be installed in cars.


You really shouldn't waste people's time with ridiculous lies when you haven't a clue.
Eikka
5 / 5 (1) Nov 20, 2014
The real problem they have to solve first is the fact that the computer vision algorithm is only 70% accurate for such a short data sample, so it's quite likely to mistake the walking couple for a shrubbery and steer that way regardless.

You really shouldn't waste people's time with ridiculous lies when you haven't a clue.

No, he's pretty much spot-on. Current AI vision algorithms have a success rate of about 70% in identifying objects in a static scene - when they've been programmed beforehand what object they should be looking for. The false positive/negative rate is ridiculously high.

Eikka
5 / 5 (1) Nov 20, 2014
The problem is that current image recognition systems aren't based on invariant representations of objects, because that's a harder nut to crack than simply teaching them millions of samples to give it a statistical representation of an object.

The invariant model would require that the computer would understand - on some level - what a baby is, instead of just reaching into a database to quickly compare data about what a baby might look like.

That's why the self-driving cars don't drive on vision, but on 3D laser scanners on the roof that treat every object as a solid obstacle. They don't know the difference between a mailbox and a pedestrian standing still, and when the pedestrian moves they simply try to predict where it will go based on movement alone. At best they try to guess what the moving object is based on image recognition, but they can't depend on it.
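
A toy version of the movement-only prediction Eikka describes: the tracker does not know what the object is, only where it has been, and extrapolates a straight-line path. All coordinates and timings here are invented for illustration.

```python
def predict_position(track, dt, horizon):
    """Extrapolate the last observed velocity in a straight line for `horizon` seconds."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (x1 + vx * horizon, y1 + vy * horizon)

# Two scanner returns 0.1 s apart: the object moved 0.1 m along x.
track = [(10.0, 2.0), (10.1, 2.0)]
print(predict_position(track, dt=0.1, horizon=2.0))  # -> roughly (12.1, 2.0)
```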
adam_russell_9615
not rated yet Nov 23, 2014
Any moral choice should be made by the human 'driver'. Allow them to set the morality options in advance. You could put a slider control in: protect me at all costs vs. save the children, or somewhere in between.
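
The slider adam_russell_9615 proposes could be a single parameter. A minimal sketch, where the parameter name and weights are hypothetical:

```python
# 0.0 = always sacrifice the occupants, 1.0 = protect them at all costs.
SELF_PRESERVATION = 0.5  # hypothetical owner-set slider value

def outcome_cost(occupant_risk, bystander_risk, s=SELF_PRESERVATION):
    """Blend the two risks according to the owner's slider setting."""
    return s * occupant_risk + (1.0 - s) * bystander_risk

# At s = 0.5 both parties weigh equally; moving the slider changes
# which maneuver scores lower.
print(outcome_cost(occupant_risk=0.9, bystander_risk=0.1))  # -> 0.5
```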
ubavontuba
1 / 5 (1) Nov 23, 2014
11% Is More Than Enough: "Save The Girl!"

http://www.youtub...wqPgQGVA

Says it all.
gkam
2.3 / 5 (3) Nov 23, 2014
This is far from an issue that will keep us from using them as soon as we can.

I suspect an RFID (vehicle IFF/SIF) or equivalent would be on all vehicles, including bikes.
freeiam
not rated yet Nov 24, 2014
Of course, you are assuming that the sensors and software that will be deployed will have the magical ability to determine that level of information about the other mobile obstacles. At current, to identify something like a baby as being a baby is not an easy task. Add to that the fact that any autodrive system will probably have far less power than what the people experimenting with object recognition are using, .. .


Exactly!
The perception and cognitive power of the human mind are incredibly underestimated, and saying things like "The cars of the future ... will have near-perfect perception ..." signifies an utter lack of understanding.
Car makers should focus on aiding people when they do not function properly: enhance their perception (instead of replacing it), increase their attention, and prevent them from falling asleep. When all else fails, cars should stop slowly at the side of the road or slow down just before impact.
bluehigh
3 / 5 (4) Nov 24, 2014

God I hate bigots
- Otto

Talking to God, Otto?

I hate hypocrites.
gkam
1 / 5 (2) Nov 24, 2014
"Current AI vision algorithms have a success rate of about 70% in identifying objects in a static scene"
-------------------------------------------
We will probably use some kind of RFID if the AI is not better.
TheGhostofOtto1923
3 / 5 (4) Nov 24, 2014

God I hate bigots
- Otto

Talking to God, Otto?

I hate hypocrites
It was an expression used sarcastically. Inability to recognize sarcasm is a sign of senility, did you know that? Or of religious conviction.
Zera
not rated yet Nov 24, 2014
Distributed computer intelligence is evolving at an extremely rapid rate. I truly believe that driving a car is just a series of equations made at a subconscious level. If we can relegate that process to a computer (not bound in speed by chemical law) then it's just a matter of ensuring the input is correct to allow for all variables to be measured. Morality is a luxury. I believe we program the computer to ensure optimal viability concerning survival and end it there.

For example, 2 lives trumps 1.
If the math says a course of action has a higher percentage of success, then that is the action to be taken.

However, I truly believe if we can program correctly then the chances of accidents are going to be so low as to be memories only.
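
Zera's "2 lives trumps 1" arithmetic amounts to minimizing expected deaths. A minimal sketch, with every headcount and probability invented for illustration:

```python
# (people at risk, probability each survives) for each maneuver.
maneuvers = {
    "swerve_left":  (1, 0.30),
    "swerve_right": (2, 0.90),
    "brake_only":   (3, 0.50),
}

def expected_deaths(people, p_survive):
    """Expected number of deaths if this maneuver is taken."""
    return people * (1.0 - p_survive)

# 0.7, 0.2 and 1.5 expected deaths respectively -> swerve_right wins.
print(min(maneuvers, key=lambda m: expected_deaths(*maneuvers[m])))
```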
Eikka
5 / 5 (2) Nov 26, 2014
We will probably use some kind of RFID if the AI is not better.


The problem is that you can't stick RFID tags to everything that might wander onto the roads.
antialias_physorg
5 / 5 (1) Nov 26, 2014
Have a random number generator decide when there's no clear "win" scenario. At least in that case no one can fault anyone for intentionally doing/programming the wrong thing.
(In some cases ANY decision will be viewed by someone as the wrong one)
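
antialias_physorg's coin-flip rule is also simple to sketch. Costs here are hypothetical; the point is only that exact ties are broken randomly rather than by a programmer's preference:

```python
import random

def choose(options):
    """Deterministic when one option is clearly best; a random draw among exact ties."""
    best = min(options.values())
    tied = [name for name, cost in options.items() if cost == best]
    return random.choice(tied)

# Two equally bad choices: the draw, not a programmer, picks one.
print(choose({"swerve_left": 1.0, "swerve_right": 1.0, "brake": 2.0}))
```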
freethinking
not rated yet Nov 26, 2014

God I hate bigots
- Otto

Talking to God, Otto?

I hate hypocrites
It was an expression used sarcastically. Inability to recognize sarcasm is a sign of senility, did you know that? Or of religious conviction.

Otto, it was an expression used sarcastically. Since you are aware that the inability to recognize sarcasm is a sign of senility, and since you were unable to identify the obvious sarcasm, I can only conclude, based on the evidence, that you are indeed senile.
barakn
1 / 5 (2) Dec 09, 2014
Hit the biker. Why? Most likely they are nothing more than a Hypocritical Progressive Tree hugger. Or if the choice is the poet, Hit the poet, they don't do anything useful anyway and most likely is a Hypocritical Progressive Tree hugger. -freethinker

Snap judgment based on superficial appearance. Thinking typical of a racist.
