Tesla working on Autopilot radar changes after crash

Tesla Model S

Tesla Motors is working on modifications to its Autopilot system after it failed to stop for a tractor-trailer rig in a Florida crash that killed the driver of a Model S sedan.

CEO Elon Musk, in a Twitter post Thursday night, said Tesla is working on improvements to the radar system. Autopilot uses cameras, radar and computers to detect objects and automatically brake if a Tesla vehicle is about to hit something.

But in the May 7 crash that killed Joshua D. Brown, 40, of Canton, Ohio, cameras in his Tesla Model S failed to distinguish the white side of a turning tractor-trailer from a brightly lit sky, and the car didn't automatically brake, the company has said. Signals from radar sensors also didn't stop the car, and Brown didn't take control either.

Tesla wouldn't comment Friday on Musk's tweets or possible changes to Autopilot, which is being scrutinized by two U.S. government agencies. Whatever changes are made will have broad implications for Tesla and other automakers, which either have similar technology in place or are nearly ready to put it on the road as they move toward fully autonomous driving within the next decade.

Just after the crash was made public June 30, Musk indicated in a tweet that radar data was discounted in the Florida crash. His tweet, which has since been removed from Twitter, said that radar "tunes out" objects like an overhead road sign to avoid stopping the car for no reason. Experts say this means the radar likely overlooked the tractor-trailer in the Florida crash.

Thursday, Musk tweeted that the company is working on changes that would "decouple" the Autopilot's radar from its cameras and allow the radar to spot objects with fewer data points. Car sensors produce more data than onboard computers can process in real time, so self-driving systems must make decisions from a reduced set of data points.
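Musk did not publish technical details, but the idea of spotting an object from sparse radar returns can be illustrated with a simple sketch. All names, thresholds and the clustering rule below are illustrative assumptions, not Tesla's implementation:

```python
# Hypothetical sketch: deciding whether sparse radar returns form an obstacle.
# Names and thresholds are illustrative assumptions, not Tesla's implementation.

def cluster_returns(returns, gate=2.0):
    """Group radar returns (ranges in meters) that lie within `gate` meters."""
    clusters = []
    for r in sorted(returns):
        if clusters and r - clusters[-1][-1] <= gate:
            clusters[-1].append(r)
        else:
            clusters.append([r])
    return clusters

def detect_obstacle(returns, min_points=2):
    """Report the range to the nearest cluster with at least `min_points` returns.

    Lowering `min_points` is one way to "spot objects with fewer data
    points" -- at the cost of more false positives.
    """
    for cluster in cluster_returns(returns):
        if len(cluster) >= min_points:
            return min(cluster)  # range to nearest confirmed obstacle
    return None

# With a strict threshold (5 points) the two returns near 40 m are ignored;
# with a relaxed one (2 points) they register as an obstacle.
returns = [39.8, 40.3, 120.0]
print(detect_obstacle(returns, min_points=5))  # None
print(detect_obstacle(returns, min_points=2))  # 39.8
```

The trade-off this toy example exposes is the real one: requiring fewer points per object makes a large, poorly reflective target harder to miss, but also makes spurious braking more likely.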

Experts contacted by The Associated Press say it's clear that Musk is focusing on the radar so his cars spot tractor-trailers in similar circumstances. "It kind of strikes me that they're figuring out how to solve that problem," said Timothy Carone, an information technology and analytics professor at the University of Notre Dame business school.

Radar can see through bright sunlight, rain, snow and other things that can block the sight of cameras, so it makes sense that Tesla would try to emphasize radar more after the Florida crash, said Raj Rajkumar, a computer engineering professor at Carnegie Mellon University who leads its autonomous vehicle research.

The cars' software would have to be updated so it considers the radar data and determines if obstacles are in the way, Rajkumar said.
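One plausible way to rely more on radar without braking for every overhead sign, in the spirit of what Rajkumar describes, is to require that a close return persist across several consecutive sensor frames before acting on it. This is a minimal sketch under that assumption; the function name and thresholds are hypothetical:

```python
# Hypothetical sketch: treat radar as an independent vote, but require a
# close return to persist across frames so a one-off reflection (e.g. an
# overhead sign passing through the beam) does not trigger the brakes.
# All names and thresholds are illustrative, not Tesla's implementation.

def should_brake(frames, min_consecutive=3, range_threshold_m=50.0):
    """Brake only if radar reports a close return in `min_consecutive`
    consecutive frames. Each frame is the nearest radar range in meters,
    or None when nothing was detected."""
    streak = 0
    for nearest in frames:
        if nearest is not None and nearest < range_threshold_m:
            streak += 1
            if streak >= min_consecutive:
                return True
        else:
            streak = 0
    return False

# A single spurious close return is tuned out...
print(should_brake([None, 30.0, None, None]))  # False
# ...but a persistent close object triggers braking.
print(should_brake([45.0, 40.0, 35.0]))        # True
```

A real system would track objects in two or three dimensions and fuse camera confidence with radar range-rate, but the persistence check captures why "tuning out" transient returns can also tune out a real obstacle if it is only seen briefly.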

Tesla has said that it constantly updates the Autopilot system as the company takes in data from cars that are on the road.



© 2016 The Associated Press. All rights reserved.

Citation: Tesla working on Autopilot radar changes after crash (2016, July 16) retrieved 19 October 2019 from https://phys.org/news/2016-07-tesla-autopilot-radar.html

User comments

Jul 16, 2016
But in the May 7 crash that killed Joshua D. Brown, 40, of Canton, Ohio, cameras in his Tesla Model S failed to distinguish the white side of a turning tractor-trailer from a brightly lit sky, and the car didn't automatically brake, the company has said. Signals from radar sensors also didn't stop the car, and Brown didn't take control either.


It is instructive that after the crash, and continuing since, the observation is made that Joshua Brown did not see the transfer truck or brake the car, implying that the autopilot functioned as well as a human. But of course, Brown was not driving and was watching a movie at the time of the crash. If Brown had been driving, he would doubtless have seen the transfer truck and taken avoidance measures.

There is a strong desire in many to pretend that these systems do not add additional dangers to road transportation.

Jul 16, 2016
If radar sensors will automatically cause this and future cars to brake, police should integrate radar emitters into stop sticks. A seemingly solid wall ahead of you on the road will make the car stop, even if you don't want it to.

Jul 16, 2016
Jeffhans1,

There are multiple ways that auto-braking cars can kill you. At a minimum, these so-called improvements to an automobile should come with a switch to disable them.

Jul 16, 2016
greenonions,
The systems do come with a switch to disable them.


Really? That is good news. Wonder why I have never heard of it. Do all the cars with self braking systems have a disable switch?

Jul 16, 2016
greenonions,

So Tesla does not have the capability to disable the auto braking system. You said that systems come with a switch to disable the auto braking system. Which systems actually have a switch to disable the systems?

why would you want to disable a safety feature?


Because it can cause accidents which could otherwise be avoided or make unavoidable accidents worse because they wrest control from the driver.


Jul 17, 2016
RM07,
What are you trying to say -- that autopilot (which is currently just lane-keeping and traffic aware cruise control) is adding dangers and thus "more" dangerous than human driving?


Not trying to say it. I think I said it clearly. Do you think that a competent driver would have driven his vehicle under a transfer truck at high speed? By all accounts, Joshua Brown was a competent driver, but he let the Tesla drive for him and died because of that decision.

Systems which interfere with the capacity of the driver to maintain attention and control of the vehicle are inherently less safe than a human driver who is maintaining attention and control.

Jul 17, 2016
None of this is rocket science, which funnily enough, is his other venture and one that they appear to be quite good at.

Jul 17, 2016
So it's reflection, was there a moment of sentient or a confused programmer,i .e. visual data, and radar. At speed? My excuse, else would it simply look at itself? Too much light? Either way I'm at speed, with calibrated radar, visual, 'eh where was the laser? Backup, computational, ... sonar? ,,, thermals? Physics, somebody had to try to be clever. I know, else incompetent. Expected possibilities based upon the data, all degrees of freedom. Characteristics from expected minimal to some maximums. Close to all reality. A simple multi dimensional control space. Simple, can't screw that up. Just protect the bubble, i.e safety bubble, within control space, command and actuality. Should not be able to force an error, like driving into services. Throw in spectral decoding to substance with audio to define quantity or ... I'd give you everything at cost to design with my A$$ well protected. It must do what? What the hell is what? juz say'n

Jul 17, 2016
i see bafooning

Jul 17, 2016
indeed Puma you see right:

Aaaah monkeyyyyy spent his whole weekend glued to his pc screen (as always), dogfart (antigoracle monkey sock) pressing buttons on his keyboard that does not make sense to him, resulting in dumb opinions

Jul 17, 2016
Car sensors produce so much data that computers can't process it all. So fewer data points are needed for self-driving systems to work.


No. Absolutely no.

A better computer is needed to handle all the data, instead of paring down the data to fewer points for the feeble AI on-board to handle. The more you "filter" it, the more these kinds of accidents happen because the car becomes dumber and dumber.

If the complaint is that the computer can't track everything around it, the answer is not to give it less to look at because the world is still there even if you put blinders on.

Jul 17, 2016
greenonions,
Tesla has the ability to disable the system - I showed you that.

No, you said it did, then you said:
Yes the Tesla automatic braking can be disabled temporarily - it will reengage after stopping - and then restarting the car.

So the Tesla system cannot be permanently disabled.

Are all disable switches temporary, or do any of them actually work? It is important to know that such dangerous systems can be disabled, since auto manufacturers seem intent on forcing them into new cars.

If the disable switch has to be continually switched, it is not really a disable switch. It just makes a permanent danger irritatingly random.

Jul 17, 2016
This comment has been removed by a moderator.

Jul 17, 2016
greenonions,
where is your evidence that these systems are "dangerous" - your word.


First, your so-called evidence that they are safer is simply a declaration by the NHTSA.

Second, you are not without reasoning ability. AEB systems are dangerous whenever the road conditions have low friction (water, ice, snow). They are dangerous when they prevent the driver from driving out of a pending accident because locked brakes remove the ability to drive the vehicle. They are dangerous when the driver should accelerate out of a pending accident because the AEB is hitting the brake when the driver is trying to accelerate.


Jul 17, 2016
greenoions, continued

An example. Many years ago I was driving with my family on a four lane road in a city. I was in the right hand lane when a car with four girls in the left hand lane pulled across in front of me to enter a car dealership. The accident was unavoidable. I braked enough to slow the car without losing control and guided my car to hit the other car between the front and back seat. No one was injured in the accident. If I had had AEB on my car, one or more of the people in the other car would have been seriously injured or dead.

rgw
Jul 17, 2016
Traffic lights did not end intersection crashes, such lights lowered the risk. Lane lines did not end errant driving, the lines lowered the risk. Divided, limited access highways did not end crashes, these improvements lowered the number and severity of collisions. Traffic deaths per capita are about half of 50 years ago. If you do not see the trend, stop reading here.

Automated traffic will not end accidents, at least not on day one; but assume the improvement is 'only' 10%. That would be 3,500 FEWER American dead each year and tens of thousands fewer injuries. Eventually all vehicles will be connected, and even mistakes like the supposed non-identification of a semi trailer will end. Reasonable estimates of the lives saved are in the realm of 75%. That is 22,500 useless, preventable deaths eliminated annually.

rgw
Jul 17, 2016
If you are a myopic, self absorbed luddite whose postings reveal an extreme infatuation with a personal and blatantly ludicrous sense of superiority, no amount of reason or experience will alter the depths of your ignorance. Unlike traffic control, stupidity is an unimprovable life sentence.

Jul 17, 2016
rgw,

Will we ever have systems which are actually safe and effective? Perhaps, perhaps not.

What we have now is seriously not safer or more effective than an alert, competent driver. Forcing these systems on alert, competent drivers can only result in accidents and death where the driver could have prevented the accident or death.

rgw
Jul 17, 2016
dogbert,
I drove in a category I'll describe as 'Transporter' for a couple of decades. I was very 'liberal' with my interpretation of traffic regulations, even though getting stopped was not a viable option. I had neither accidents nor even a single speeding ticket. In the uncounted numbers of vehicles I encountered on these long ago adventures, I can agree that most drivers are most times 'competent'.

At the risk of denigrating Lincoln's memory, I can also say that all drivers are sometimes incompetent and a significant minority should not be allowed anywhere near a steering wheel - ever. If automated control systems show evidence of fatigue, drunkenness, aggravation etc, then I will agree with your belief system. Unfortunately your conviction is based on faith, which, by definition, is the absence of proof.

Jul 17, 2016
RM07,

Tesla systems are limited in what they can do and when they can do it. If you subtract all the times human drivers had accidents in conditions that Tesla cannot drive at all, your statistics would be far different.

Joshua Brown would likely be alive and well today if he had not trusted Tesla. He is dead because Tesla did not see a transfer truck across the road as a danger.

Jul 17, 2016
greenonions,

I do know how to Google and it is certainly not difficult to find opinions which support AEB systems. The two you linked to:

Mercedes Benz saying their AEB system reduces accidents. Mercedes Benz has been found by the U.S. government to have rigged their systems to report that their diesel engines were cleaner than they are. Not a reliable source.

The other link relies on meta-analysis, which is generally used to promote an idea when the data do not support that conclusion.

It is notable that you have provided no evidence to support your position.


Links to someone else's opinion does not prove your opinion. If I were to link to opinions congruent with mine, those links would be similarly useless as yours.

I provided situations where AEB would cause accidents and elaborated with a real world example.


Jul 17, 2016
greenonions,
You are of course free to do as you choose


If all cars are to be equipped with AEB systems and such systems are rigged so that they cannot be disabled, then I don't really have a choice to not use cars with such systems. Which is why I point out their danger.

Jul 17, 2016
rgw,

Will we ever have systems which are actually safe and effective? Perhaps, perhaps not.

What we have now is seriously not safer or more effective than an alert, competent driver. Forcing these systems on alert, competent drivers can only result in accidents and death where the driver could have prevented the accident or death.

It's a simple real-time matrix with sight better than any human, reaction time in nanoseconds. You simply need to satisfy the expected. " .. a turn-off, in front of me, an object that may choose ..?.." I get it. I just like my preparedness, 'cause my logic defines the safety bubble. Worst case is a protected passenger, that's my worst case. Think!

Jul 17, 2016
Future, we can literally take another route, infrastructure, why allow ...

Jul 17, 2016
greenonions,
When all the engineers at the automakers determine that net net this is a smart way to go - they know more than you do.


You obviously don't understand why the auto makers are rushing to install AEB systems. They are rushing to install them before the government forces them to do it along with regulations on how they must do it. They are trying to avoid the regulatory system and the costs associated with such regulation.

Jul 17, 2016
Anyway, after this conversation, enter the 21st century; 20th century solutions are not 21st! I don't even think you have a buss connection. juz say'n Oh we don't use wire anymore, get out!

Jul 18, 2016
This comment has been removed by a moderator.

Jul 18, 2016
The media will be extremely biased in cases such as this, as they always are.
We will get to hear about every single fatal crash caused by driverless tech, yet I doubt we'll hear anything at all about all the people it saves.
How many people has it saved already in fact? Surely some?

Jul 18, 2016
We will get to hear about every single fatal crash

That depends. Media rate what they print according to buzzwords (i.e. how 'click-baitey' they can make it). In this case there's quite a few buzzwords in there: 'Tesla', 'autopilot', 'death' (if it bleeds it leads).
There's no real relationship between how important something is and whether it makes the news nowadays.

How many people has it saved already in fact? Surely some?

"Number of people statistically saved" just isn't click-baitey enough.

In the end people can always decide whether they buy this feature and whether they switch it on or not. And if they do switch it on, then they had better first read the instruction manual.

Maybe insurance companies will give bonuses to those who switch it on - as they are purely statistics driven.

Jul 18, 2016
oh, i don't know about all the dangers, simply do not accept stupid commands, or verify sender, in car, not in car, authorized, not authorized. But I like the selective commands, if you want a ram, get another device, the monitor replies.

Jul 19, 2016
Perhaps this article isn't very recent, for did not Tesla announce that their analysis revealed that the autopilot device was not 'On' ...?

So, shall we expect an announcement distancing their new research from the matter of a faulty autopilot?

Jul 19, 2016
It's unbelievable someone would trust their life to a flaky AI system! This isn't Star Trek; these systems are fundamentally unreliable. It's like playing Russian Roulette.

I'd say the autopilot feature should only be used in an emergency, and then as briefly as possible.

Jul 19, 2016
It's unbelievable someone would trust their life to a flaky AI system! This isn't Star Trek; these systems are fundamentally unreliable. It's like playing Russian Roulette.

I'd say the autopilot feature should only be used in an emergency, and then as briefly as possible.


The negation of that statement is true. Typically, automation beats top drivers; but yes, more tests are needed, for there exist human drivers, crazy $hit! That is, that that does not follow logically. However, sw can compute possible trajectories of human driven missiles, i.e. the sw sees a human and goes into panic mode. Oh, panic by a machine is handled calmly, the way I design. juz say'n

Jul 19, 2016
Nanny / surveillance state. Debilitating humans. Going from a machine that is an extension of ourselves to being transported veggies. Guess everyone gets more iphone time though. Though total robot not really serious proposal.
