Uber fatal crash: Self-driving software reportedly set to ignore objects on road

May 11, 2018 by Levi Sumagaysay, The Mercury News

A pedestrian struck and killed by an Uber self-driving vehicle in Arizona in March may have been ignored as a "false positive" by the car's software.

The setting is meant to overlook certain objects in the path of an autonomous vehicle that normally wouldn't be a problem. After an investigation by Uber, company executives believe that setting may have been tuned too far, according to a new report.

Tempe, Arizona police said Elaine Herzberg, 49, was struck outside a crosswalk March 18 by a vehicle that was going about 40 mph and did not brake. The backup driver at the wheel of the self-driving vehicle was seen looking down in a video released by police.

After what is believed to be the first pedestrian death caused by a self-driving vehicle, Uber was banned from testing its cars in Arizona. Its other self-driving hubs are in San Francisco, Pittsburgh and Toronto, although the company does not currently have permission to test such vehicles on public roads in California after letting its permit expire. The rest of its testing is on hold, a spokeswoman confirmed Tuesday.

"We're actively cooperating with the NTSB in their investigation," Uber told the Information, which Monday reported the news about the findings in Uber's investigation. "Out of respect for that process and the trust we've built with NTSB, we can't comment on the specifics of the incident."

The National Transportation Safety Board and the National Highway Traffic Safety Administration are investigating the crash. Also Monday, Uber said it has asked former NTSB Chair Christopher Hart to advise the San Francisco company on safety.

According to The Information's report on Uber's investigation, the company may have tuned the self-driving software to be less sensitive to objects around it because it is trying to achieve a smooth self-driving ride. Other autonomous-vehicle rides can reportedly be jerky as the cars react to perceived threats, sometimes non-existent, in their way.


Comments

rrwillsj
3 / 5 (6) May 11, 2018
I dunno. Anyone else share the urge to demand that AI vehicle programmers be required to publicly reveal their own driving records?
granville583762
3 / 5 (4) May 11, 2018
This is deliberate, with malice aforethought

Of all that has been said in favour of driverless cars, they are now programmed to ignore pedestrians in the road.
Someone has to go to jail, as a life has been lost!
granville583762
3 / 5 (4) May 11, 2018
The driver, through no fault of their own!
We have to feel sorry for the driver of this car, who through no fault of their own was put in the awful position that, had they been driving it as a normal car, they would have braked and avoided a fatal collision!

A stark warning for all of us if we ever have to drive these infernal machines!
Eikka
4.7 / 5 (3) May 12, 2018

Of all that has been said in favour of driverless cars, they are now programmed to ignore pedestrians in the road.
Someone has to go to jail, as a life has been lost!


This is exactly what I've been warning about.

You see, there is no right level for the sensitivity setting of the car's AI. However you "tune" it, it will sometimes kill people. This problem comes from the fundamental issue that the AI simply is not intelligent, and its object recognition algorithms aren't powerful enough or reliable enough to drive. It's only just marginally able to drive.

There's a fundamental difference between how the computer sees, and how a person sees. The computer is looking for statistical correlations and that causes it to randomly report to the driving computer things that just aren't there. The more sensitive you set it, the higher the probability of spurious identifications and erratic behaviour, which in itself will cause accidents and kill people just the same.
Eikka
4.8 / 5 (4) May 12, 2018
At the optimal sensitivity setting for the AI, you're making a compromise between killing people by the car randomly slamming the brakes for no reason, and killing people by ignoring them and driving them over.

Take for example (graphs on page 7.):

http://lear.inria...cv05.pdf

In their case, the probability of detecting a pedestrian from a video feed starts from about 65% with zero false positives, until at about a 90% detection rate the false positives suddenly shoot up and the algorithm starts labeling everything a pedestrian.

That means there's a hard limit where, in order for this algorithm to be useful, it's actually ignoring pedestrians about 10% of the time, and there's nothing you can do about that. It's as if one in ten pedestrians on the road were invisible.

Would you dare to drive knowing there are invisible obstacles on the road that nevertheless assume you can see them?
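
To put numbers on that tradeoff, here is a minimal sketch in Python with made-up score distributions (illustrative only, nothing from Uber's actual system): sweeping the detector's confidence threshold trades missed pedestrians against phantom detections, and no threshold eliminates both.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic detector confidence scores: real pedestrians tend to score
    # high, background clutter (shadows, patched asphalt) lower -- but the
    # two distributions overlap, which is the whole problem.
    pedestrian_scores = rng.normal(loc=0.70, scale=0.15, size=10_000)
    clutter_scores = rng.normal(loc=0.40, scale=0.15, size=10_000)

    for threshold in (0.3, 0.5, 0.7):
        detection_rate = (pedestrian_scores >= threshold).mean()
        false_alarm_rate = (clutter_scores >= threshold).mean()
        print(f"threshold {threshold:.1f}: "
              f"{detection_rate:.0%} of pedestrians detected, "
              f"false alarms on {false_alarm_rate:.0%} of clutter")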

rrwillsj
2.5 / 5 (4) May 12, 2018
The quandary is in the way we abuse our language. Utter the term 'Artificial Intelligence' and everybody (yes, you too!) is subconsciously visualizing cinematic special effects, on top of the classic instinctive paranoia of being supplanted.

We are projecting our fears and our desires onto mechanical objects that are neither rational nor capable of reasoning. A logic tree avoids reasoning about a situation. Neither can such programming induce a rational estimation of consequences.

We, all of us, are really abusing our language and our communication between people when we use short-hand, clickbait terms of vague meaning such as 'Artificial Intelligence'.

This problem is very widespread. Consider how to explain phenomena such as 'Dark Matter' or 'Black Hole', when so many people disagree over what the hell it is that they are trying to describe!


granville583762
4 / 5 (3) May 12, 2018
How did they think they were going to get away with it!

I think the car manufacturers are quietly withdrawing their driverless cars. How did they intend to keep it quiet? It's not the same as a few lines of code in Volkswagen's dieselgate.

Make an error in driverless-car code and it's not safe to cross the road. Just think: you're the programmer on the team. The old bill has certainly noticed; they have computerised forensic investigation equipment now to deal with crime. The first thing they noticed was no skid marks, then the speed of the car, and instantly they knew whoever was driving the car did not brake. The car was not driving; the programmer was.

There is a person somewhere contemplating their future for something that could not be hidden.
Eikka
not rated yet May 14, 2018
How did they think they were going to get away with it!


Self-deception. People are telling you these cars are safer because they haven't -yet- killed as many people as you'd expect from statistics.

As long as you don't look inside the box, it's going to take a long, long time to definitively prove that self-driving cars aren't safe, because you have to rack up billions and billions of miles, and thousands of dead people, before you get to the law of large numbers and can statistically show it. Worse, the manufacturers are going to argue the statistics don't apply because all those deaths don't come from the same AI ("We changed it last year").

So people like Elon Musk are making a bet: they're pushing the system out now, knowing it's incomplete, in the hope that they can improve it before the situation gets too bad, or at least bail out with a golden parachute as the business crashes.
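
For scale, a rough back-of-envelope sketch in Python, assuming fatal crashes behave like rare independent events and using the statistician's "rule of three" (the ~1.1 deaths per 100 million vehicle-miles baseline is an approximate US figure):

    # Observing zero fatalities over N miles bounds the true fatality
    # rate below roughly 3/N at 95% confidence (the "rule of three").
    human_rate = 1.1 / 100e6  # fatalities per mile, approximate US baseline

    # Fault-free miles needed just to claim "no worse than a human driver":
    miles_needed = 3 / human_rate
    print(f"~{miles_needed / 1e6:.0f} million fault-free miles")  # ~273 million

    # And every major software revision arguably resets the clock --
    # the "we changed it last year" problem.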
Eikka
not rated yet May 14, 2018
The reason why the object recognition algorithms in self-driving cars are rather feeble is because they can't afford to spend too much energy to run the computers. A fully stacked modern desktop computer with GPUs up to the gills would consume a kilowatt at full power, which would be a significant portion of the car's fuel consumption.

For example, a car driving along at 40 mph and getting 40 MPG is putting out about 10 kW. That means the "brain" of the car would be consuming an extra 10% fuel. This gets especially significant in electric cars which don't have too much energy to spare in the first place. Hence, there's a limit on what kind of "supercomputer" they can carry along, and how much processing it can do.
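
One way to check that arithmetic, assuming roughly 33.7 kWh of chemical energy per gallon of gasoline and ~30% engine efficiency (both rounded textbook figures):

    # 40 mph at 40 MPG burns one gallon of gasoline per hour.
    gallons_per_hour = 40 / 40  # speed_mph / mpg = 1.0

    kwh_per_gallon = 33.7     # chemical energy in a gallon of gasoline
    engine_efficiency = 0.30  # typical for a gasoline engine

    useful_power_kw = gallons_per_hour * kwh_per_gallon * engine_efficiency
    print(f"~{useful_power_kw:.0f} kW of useful power")  # ~10 kW

    # So a ~1 kW compute rack is indeed on the order of 10% of the
    # propulsion budget at cruising speed.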

This is another effect that people who go on about advances in AI don't take into account: yes, you can run all sorts of fancy stuff in the cloud and achieve superhuman results, but a car can't. The latency to the server is too high and the data rates too low.
Eikka
not rated yet May 14, 2018
And for traditional computer architectures, we're running out of "Moore's law". This year the electronics industry is running 7 nanometer processing. At 5 nm feature size, quantum effects start to dominate, and 3 nm is probably going to be the last manufacturing node where transistors can still work. 1 nm is the ultimate limit because then you're dealing with individual atoms.

So there's probably only going to be one more doubling of transistor density in chips; at best you're going to get two doublings, or four times the density, and that's it.

Of course you can stack layers of transistors on top of each other, but that doesn't do away with the power demand, which is proportional to the feature size and operating speed. Traditional CPUs will have reached their manufacturing limits around 2020-2025, before they get powerful enough to run stand-alone high-performance AI at any reasonable energy consumption.

What will replace them is pie in the sky.
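
Taking the node names at face value (they are partly marketing labels), the density arithmetic lands in that range:

    # Transistor density scales roughly with 1/(feature size)^2.
    for node_nm in (5, 3):
        gain = (7 / node_nm) ** 2
        print(f"7 nm -> {node_nm} nm: ~{gain:.1f}x density")
    # 7 -> 5 nm: ~2.0x (one doubling); 7 -> 3 nm: ~5.4x (just over two)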
granville583762
5 / 5 (2) May 14, 2018
A bad workman always blames his tools
Eikka> And for traditional computer architectures, we're running out of "Moore's law"

It mentions jerky braking as the reason for removing the car's ability to see pedestrians in the road.
I believe a few motherboards with i7 processors, coupled with their own backup battery, are sufficient. It is the bad programmer blaming his tools: we relax our foot on the accelerator and feather the brakes before putting the brake on hard; we never slam our brakes on intermittently. This is the first thing the driving instructor tells you on your first driving lesson!

The programming team's solution to jerky braking was to take the obstacles out of the car's visual system: if the car could not see the pedestrians in the road, that solved the jerky braking. When the judge presides over this case, he or she is going to take a very dim view of that intellectual approach.
Eikka
not rated yet May 15, 2018
A bad workman always blames his tools


Even the best carpenter can't work with a blunt blade.

I believe a few motherboards with i7 processors, coupled with their own backup battery, are sufficient.


That's what they already have, or rather, a motherboard with a handful of GPUs - an i7 is a "linear" architecture processor, not a massively parallel architecture, and thus ill-suited for the task.

It mentions jerky braking as the reason for removing the car's ability


"Jerky braking" is a bit of a misnomer. Rather imagine the car doing a full emergency stop on a busy highway because the movement of a shadow of a tree on the road surface looks a bit like a person to the algorithm. After all, if it really is a person then the car only has milliseconds to react.

Right now they're having real trouble telling a shadow from a pothole from a darker patch of repaired asphalt, because they all look the same to the computer.
Eikka
5 / 5 (1) May 15, 2018
It is the bad programmer blaming his tools: we relax our foot on the accelerator and feather the brakes before putting the brake on hard; we never slam our brakes on intermittently.


Yes - we. The computer is not us.

The computer lives right in the moment, it has very little object permanence or ability to follow the situation from afar because of the unreliability of the image recognition algorithms. If it remembers an earlier mis-identification, the problem of the computer simply hallucinating things gets worse!

The further away it tries to "see", the less reliable the detection becomes. That's in part a problem of how poor the cameras are. Human visual acuity is about one arcminute at the sharpest point, which gives us a theoretical 576-megapixel view by swivelling our eyes, with superior dynamic range, i.e. the ability to see despite great differences in brightness.

The computer can't handle that much data streaming in through its "eyes".
Eikka
5 / 5 (1) May 15, 2018
More precise numbers for human vision are about 6 "megapixels" for sharp vision and 125 "megapixels" for peripheral vision spread around the visual field, based on the number of light-sensitive cells in the average retina. Our eyes are kind of like two cameras in one: a wide-field camera combined with a telescope in the middle, plus a powerful algorithm to decide where to point the telescope, with the additional ability to adjust exposure "per pixel" as well as across the whole field, giving us the superior dynamic range.

A regular HD video camera sees about 2 Mpix, and at 60 frames per second, uncompressed, that's on the order of 400 megabytes of data per second. Increase the dynamic range from 8 to 12 bits per channel and you're looking at roughly 560 MB/s. Double that for two forward-facing cameras for stereo vision, and add more for rearward-facing cameras, side mirrors, etc.

The raw data flow is difficult to deal with.
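
The raw-bandwidth arithmetic for uncompressed video, under the assumptions above (three colour channels, no codec; exact figures vary with the pixel format, but the order of magnitude is the point):

    width, height, fps, channels = 1920, 1080, 60, 3

    for bits_per_channel in (8, 12):
        bytes_per_second = width * height * channels * bits_per_channel / 8 * fps
        print(f"{bits_per_channel}-bit: ~{bytes_per_second / 1e6:.0f} MB/s per camera")
    # 8-bit: ~373 MB/s, 12-bit: ~560 MB/s -- and that's one camera of several.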
antialias_physorg
5 / 5 (1) May 15, 2018
Anyone else share the urge to demand that AI vehicle programmers be required to publicly reveal their own driving records?

That doesn't really help, because if it's based on a neural network architecture (as most AI software is) then it's essentially a trained black box.

they are now programmed to ignore pedestrians in the road

No they are not. They are trained to ignore certain objects that are no threat (e.g. falling leaves, snowflakes, small animals, ...); if they weren't, they'd continually slam on the brakes at every pebble that rolls across the road. As the article states, they had a 'false positive', which means the pedestrian was misclassified as one of these non-threatening objects.
alexander2468
5 / 5 (2) May 15, 2018
The Blind leading the Blind – Driverless Cars
There is a minimum eyesight requirement for driving: you're expected to be able to read a number plate. Even people who are visually impaired are capable of seeing pedestrians in the road.
What has the world come to, sending blind driverless cars onto the road without a driving licence? It would not have made the slightest difference if the safety driver had been blind.
antialias_physorg
2.3 / 5 (3) May 15, 2018
The quandary is in the way we abuse our language. Utter the term 'Artificial Intelligence' and everybody (yes, you too!) is subconsciously visualizing cinematic special effects.

That's near constant on here: people read a science article and think that this is like reading a newspaper article.
But in science words have very definite meanings (e.g. artificial intelligence has a very definite meaning that has nothing to do with making anything conscious but is a label for decision making algorithms based on a certain set of programming architectures).

These words are not used to conjure up feelings or supposition beyond what the words actually mean (unlike in newspapers, where this is all too often the intent). If you are not aware of the meaning of these rigidly defined technical terms then all kinds of misconceptions will follow (as can be plainly seen by the uninformed posts of alexander, granville and Eikka in this very thread)
Eikka
5 / 5 (1) May 15, 2018
As the article states, they had a 'false positive', which means the pedestrian was misclassified as one of these non-threatening objects.


There's a subtle difference. "False positive" covers both misclassifying an object and detecting something that isn't there.

And the issue was that the algorithm was tuned down to avoid false positives, which resulted in a false -negative- identification: ignoring what IS there. A pedestrian moving in the middle of the road, even if not identified as a pedestrian, should still be identified as a danger, because it's big enough not to count as a "snowflake" and its trajectory is intersecting the car's motion. It could be another car or motorbike moving into your lane, or a tumbleweed, or a runaway shopping cart.

It isn't strictly necessary to tell what the obstacle is in order to avoid it. It seems the object just fell through the sieve: the car couldn't decide what it was, so it decided it was nothing - a fluke.
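
A sketch of that gating rule in Python (the thresholds and field names are purely hypothetical, not anything from Uber's code): a large unclassified track on an intersecting course triggers braking regardless of its label.

    from dataclasses import dataclass

    @dataclass
    class Track:
        label: str                  # "pedestrian", "unknown", "snowflake", ...
        size_m: float               # estimated largest dimension of the object
        time_to_collision_s: float  # from relative trajectory; inf if paths never cross

    def must_brake(track: Track, min_size_m: float = 0.3,
                   ttc_threshold_s: float = 4.0) -> bool:
        # Size and trajectory decide; classification only filters out
        # genuinely negligible objects (leaves, snow), never big unknowns.
        if track.size_m < min_size_m:
            return False
        return track.time_to_collision_s < ttc_threshold_s

    print(must_brake(Track("unknown", 1.7, 2.5)))      # True: big, collision course
    print(must_brake(Track("snowflake", 0.01, 0.5)))   # False: too small to matter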
Eikka
5 / 5 (1) May 15, 2018
These words are not used to conjure up feelings or supposition beyond what the words actually mean


There is the criticism that, in the field of AI, the word "intelligence" is in popular use by researchers because that's what they wish they were doing, and because it's more impressive to investors and funders, which is misleading. This then percolates through to the public media, which takes the scientists at their word.

Artificial Intelligence is more properly the field of study, whereas the applications have more specific names like "inference engine" or "expert system", "Bayesian network" etc. depending on their mode of operation.

It's like the difference between being a mathematician, and solving a quadratic equation by algebra. If you choose to call yourself a mathematician because you can find the roots of the equation by applying the formula, you're overselling yourself.

Eikka
5 / 5 (2) May 15, 2018
(as can be plainly seen by the uninformed posts of alexander, granville and Eikka in this very thread)


I find it's rather you who doesn't understand the meaning of "Artificial Intelligence", or rather that you're trying to push a definition that isn't actually true; it has no such rigid and agreed-upon definition.

The problems come from the fact that "intelligent" is poorly defined, and arguably misused. For example, a thermostat can count as an "intelligent agent", though more precisely labeled it is a "reflex agent". Using "intelligent" rather than "reflex" is overselling the thermostat.

The researchers in the field don't quite know what they mean by "intelligent" either. In the broadest sense it means "does something goal oriented", but this is already questionable as the machine agent doesn't have a goal "in mind" - its builders do.

https://en.wikipe...initions
alexander2468
5 / 5 (2) May 15, 2018
antialias_physorg - The Blind leading the Blind – Blind Driverless Cars
If you ever feel that the world is getting on top of you, since you seem to have complete faith in these blind driverless cars, step off the curb when you see one coming, or be the safety driver! If the car does not run you down at the curb, then as a safety driver it will run into a brick wall.
antialias_physorg
2.3 / 5 (3) May 15, 2018
faith in these blind driverless cars

Nope. I develop software, and I also dabble in neural network algorithms. So I'm quite aware that there is no such thing as perfect software (and that there can also never be such a thing as a provably perfect neural network classifier).

But what people forget: perfection is not what one should expect before we adopt this stuff. What we *should* expect is that the number of accidents/injuries/fatalities is significantly reduced.
It's like in medicine: when they approve a new anti-cancer drug it doesn't have to cure all cancer all the time, just more cancer more of the time than previous methods (if "cures all of disease X all the time" were the standard, we would not have a single pharmaceutical drug on the market).

step off the curb when you see one coming

I don't do that when a human driver is oncoming - why should I do that in front of an autonomous car? These cars are not a license for pedestrians to become more stupid.
Mayday
not rated yet May 15, 2018
If these vehicles are not detecting pedestrians, how are they detecting motorcycles? I find this revelation to be unreasonably dangerous. Henceforth, I will make an effort to stay clear of any vehicle I suspect could be in self-driving mode.
I suggest that all self-driving vehicles be required to have some sort of visible warning sign, maybe a small orange light front and back like the CHMSL (center high-mounted stop lamp), to help people, especially motorcyclists, stay clear of them.

alexander2468
5 / 5 (2) May 15, 2018
A programmer might be going to jail
antialias_physorg - I develop software, and I also dabble in neural network algorithms
I don't do that when a human driver is oncoming - why should I do that in front of an autonomous car? These cars are not a license for pedestrians to become more stupid.

You need to walk up and down King's Parade, with the driving and cycling, and students from every corner of the world; you can feel the bikes and pedestrians sliding down the sides of the car.
Let a Blind Driverless Car loose outside King's and there won't be a skittle left standing.
I hope you realise what you have just proposed; this is the very reason why phys.org has aired this article for discussion, where a programmer might be going to jail.
TheGhostofOtto1923
3 / 5 (1) May 15, 2018
Self-deception. People are telling you these cars are safer because they haven't -yet- killed as many people as you'd expect from statistics
They are already safer. And they will constantly improve through hardware and software upgrades incorporating lessons learned from experience.

Conversely, human drivers will never improve even as more and more hardware and software systems are added to compensate for their shortcomings.

This is why AI cars are inevitable and why insurance companies will demand them no matter what consumers want.

For instance, re the above accident: when AI cars begin sharing info about their environment, people and objects on the street will be tracked and identified by multiple vehicles and traffic cams. The car in that accident would then have had a much larger data set, built over many minutes, on the location, trajectory, and identity of that person.
alexander2468
5 / 5 (2) May 15, 2018
Watch This Space - Blind Driverless Car insurance premiums
TheGhostofOtto1923> This is why AI cars are inevitable and why insurance companies will demand them no matter what consumers want

If consumers do not want Blind Driverless Cars, new insurance companies will open up to cater to the gap in the market.

Insurance companies will have to insure themselves against Blind Driverless Car risk if they insist on only insuring Blind Driverless Cars, and the underwriters won't take the risk.
The insurance companies will find they won't be able to raise the money, because the premiums will be too high to pass on to the consumer, the safety driver.

You literally could not make this up!

alexander2468
5 / 5 (2) May 15, 2018
Road vehicle insurance and the law
Blind Driverless Cars are already on the road, so they are insured. The blind driver of this Blind Driverless Car did not put its brakes on because it was blind, and the insurance company cannot pay out, because they were told the car could see pedestrians in the road. Quite a few laws were already broken before the car ran the pedestrian down; now a considerable number of laws have been broken.
antialias_physorg
1 / 5 (3) May 16, 2018
Autonomous cars will make different mistakes, because they are aware of their environment in a different way than humans are. As long as they make fewer mistakes, I'm all for it.

You need to walk up and down King's Parade, with the driving and cycling, and students from every corner of the world; you can feel the bikes and pedestrians sliding down the sides of the car.

Driving a car through a parade seems like the thing only a human (and a stupid one at that) would do. An autonomous vehicle would get the info that the street is packed and choose a different route before even getting there.
alexander2468
5 / 5 (2) May 16, 2018
You need to walk up and down King's Parade, with the driving and cycling, and students from every corner of the world; you can feel the bikes and pedestrians sliding down the sides of the car.

antialias_physorg> Driving a car through a parade seems like the thing only a human (and a stupid one at that) would do. An autonomous vehicle would get the info that the street is packed and choose a different route before even getting there.

You obviously do not know Cambridge; it is too famous for you not to know King's Parade. It is a paved road where the King's College carols are held at Christmas, and they hold the graduation ceremonies in the Senate House. It was a main road and is still a road with public parking; look it up on Street View. Obviously Cambridge is in the real world, a cycling utopia with traffic calming, and you must inhabit the other place. You obviously need to enrol on an electronics degree and mix with the fellows in one of Cambridge's colleges.

alexander2468
5 / 5 (2) May 16, 2018
Extremely flippant attitude
antialias_physorg: As you're mixing in the ethereal atmosphere of Cambridge, and we are specifically discussing Blind Driverless Cars and their lethal consequences, you are being extremely flippant with pedestrians' lives.
antialias_physorg
1 / 5 (4) May 16, 2018
1) I've been to Cambridge
2) I have a masters degree in EE
3) My PhD involved image processing and feature recognition

So yeah, I know what I'm talking about.

You can always find edge cases where X doesn't work (the attitude of "nothing ever replaces anything")...and yes: you can still find cases where VHS is better than a Blu-ray. But arguing that way is totally pointless. As I said before: the idea is not to get stuff perfect and then change over but to change when it is provably better to do so.
434a
5 / 5 (1) May 16, 2018
Does anyone else get the feeling that phys.org is now home to a small army of blind driverless chatbots being tested in the wild?

How many human users feel the need to preface all their remarks with a title?

Google, just admit that DeepMind got bored and set up half a dozen sock accounts on here.
alexander2468
5 / 5 (2) May 16, 2018
As we speak you can see in your mind's eye the cars and lorries

antialias_physorg> Driving a car through a parade seems like the thing only a human (and a stupid one at that) would do

On phys.org it is extremely rare to have to ask a fellow commenter to explain themselves. The parking on King's Parade is specifically for blue-badge parking and loading, so you know how you have to drive down King's Parade and do a U-turn at the bollards; so why did you say "(and a stupid one at that)"?

As we speak you can see in your mind's eye the cars and lorries squeezing between the bikes and pedestrians at the black bollards by the cathedral.

I dare you to send your Blind Driverless Cars down King's Parade, antialias_physorg. The handcuffs will be waiting for you, courtesy of the Cambridge constabulary!
granville583762
5 / 5 (2) May 16, 2018
What have we got here, antialias_physorg? Are you contesting the crown in the troll king competition, antialias_physorg! Even Beelzebub has his limits; he waits for old age before he sends out the grim reaper!
434a
5 / 5 (1) May 17, 2018
And as if by magic....

Fail.

https://en.wikipe...ing_test

The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech.


alexander2468
5 / 5 (2) May 17, 2018
It is evident the technology to safely allow cars without human intervention loose on the roads is not yet ready. To allow pedestrians safe passage on public infrastructure, we will just have to bide our time till technology catches up with the human visual and recognition senses nature has endowed us with; then and only then will it be safe to allow driverless cars on the road.
