Germany conducting inquiry into Tesla autopilot system

Der Spiegel reports that an internal ministry assessment concluded the autopilot function represents "a considerable danger for traffic"

Germany said Saturday it was still investigating the operation of the autopilot system on cars made by electric automaker Tesla, as German media reported an internal ministerial report called it "dangerous".

According to the weekly news magazine Der Spiegel, an internal ministry report had concluded that the autopilot function represents "a considerable danger for traffic", especially because the driver is not warned when the autopilot system is not able to handle a situation.

"The final evaluation of the transport ministry concerning the autopilot functioning of Tesla Model S cars is not yet ready," the ministry told AFP, denying the reports that it had already concluded the probe.

Available for Tesla's Model S electric cars since October 2015, the semi-autonomous autopilot system has faced global scrutiny following fatal crashes in northern China in January and in the US state of Florida in May.

In September a Tesla electric car crashed into a tourist bus on a motorway in northern Germany; the driver, the only person hurt and only slightly, said he had activated the autopilot system.

A Tesla spokesperson at the time said the driver told the company the autopilot was functioning properly and its use was unrelated to the accident.

And on Saturday a Tesla spokesman told AFP: "We have always been clear with our customers that Autopilot is a driver assistance system that requires the driver to pay attention at all times."

"Just as in an aeroplane, when used properly, autopilot reduces driver workload and provides an added layer of safety when compared to purely manual driving," he added.

Consumer activists have called on the company, founded by PayPal billionaire Elon Musk, to disable the feature until it is updated to detect whether the driver's hands are on the steering wheel during operation—as the company says should be the case.


© 2016 AFP


User comments

Oct 08, 2016
These systems, which purport to drive a vehicle safely until they can't and return control of the vehicle to the driver, are inherently dangerous.

* They have severely limited ability compared to a human driver.

* When they fail and the human driver must take over, the human driver has not been maintaining a mental map of the contents of the road and the relative speed and acceleration of the surrounding vehicles. At that point, the human driver is severely handicapped in his ability to quickly respond because of that lack of immediate information.

We will continue to see easily avoidable accidents from these systems, some of them serious, like the Tesla that failed to stop and ran under a transfer truck, killing the driver, who was not actively driving at the time.

Oct 08, 2016
that requires the driver to pay attention at all times.
I thought the whole point was to drink and watch a movie without the risk there is now.

Oct 09, 2016
I'm curious how the accident rate of the Tesla system compares to human-only driving.

Oct 09, 2016
Musk will have another carpet-chewing fit because of reports like this.

Oct 09, 2016
We will continue to see easily avoidable accidents from these systems, some of them serious, like the Tesla that failed to stop and ran under a transfer truck, killing the driver, who was not actively driving at the time
And they will be far fewer and less severe than the all-human variety. And unlike humans, AI cars will incorporate lessons learned and so will get better and better at it.

Machines are indifferent to little dashboard messiahs and bobblehead godmothers and rosary beads dangling from radar detectors.

Oct 09, 2016
We have always been clear with our customers that Autopilot is a driver assistance system that requires the driver to pay attention at all times.


Then why call it "Autopilot"? That has a definite contrary connotation. Why not call it "Driver Assist" or "Enhanced Cruise Control"?

People have been trained by years of stupid manufacturers' warnings to assume that they are only provided to limit liability. How many people really wear safety goggles to peel carrots?

Oct 10, 2016
There are three kinds of people:
1. Those who are too uninformed to understand what autopilots are for and how they can and do help us
2. Those who take autopilots as an easy scapegoat for their own human incompetence, because they won't fight back
3. Those who enjoy the benefits of advanced technology which reduces overall risks and effort, if used correctly

There are also people who belong to both the first and second group. Unfortunately, these are the most vocal...

Oct 10, 2016
OK, I can see why they are checking this. Driving in Germany is more stressful than, e.g., in the US (higher speeds, narrower lanes), which means the time for decision making is reduced. So it's natural that an 'autopilot' feature will be more prone to reacting too late or giving more false positives in such an environment.

On the other hand this is utter stupidity:
the driver is not warned when the autopilot system is not able to handle a situation.

Look: if an algorithm fails it's because there are inputs that aren't expected or are missing, or there's something wrong in the way it evaluates some information. If an algorithm KNEW when it fails, then the engineers would have changed it to not fail in that situation instead of telling the driver "I just failed".

Oct 10, 2016
And unlike humans, AI cars will incorporate lessons learned and so will get better and better at it.


That's like saying you can make an earthworm able to drive a car by giving it an internet connection. The AI is still limited in its ability to discriminate, analyze and interpret its surroundings and no amount of data can overcome the fundamental incompetence of a stupid AI on a slow CPU.

then the engineers would have changed it to not fail in that situation


That's assuming it is possible given the limited cognitive power of the AI.

The AI in a car like Tesla's is so simple that it can't build an abstract understanding of its surroundings, so it can't learn higher level rules to deal with complexity. Instead, the engineers have to program in exceptions and special cases to look for each possible eventuality at a very low level - if practically possible - which will quickly overwhelm the CPU with just too many tasks to handle.

Oct 10, 2016
If an algorithm KNEW when it fails


It can know when it fails if the confidence functions for, e.g., telling whether a sign is a sign return low values. The question is whether it's better that the car tells you it's having trouble correctly identifying its environment, or whether it should remain quiet and instill a false confidence in the real driver.

The trouble right now is the primitive state of the AI, because it IS confused and running on best guesses and averages, or external non-real-time information, which most of the time are perfectly adequate because most of the time driving is very predictable.

You have to imagine the Tesla autopilot like a blind man behind the wheel, who has a number of beepers to report proximity to other cars or center of lane. He's also got a satnav to tell him about intersections and speed limits. Beyond that, he's got no knowledge of what is happening. With enough practice it is possible to drive like that, but you wouldn't trust it to be safe.
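
To make the analogy concrete, here is a deliberately over-simplified sketch of the kind of sparse state such a system might be driving from (all names and fields are invented for illustration, not Tesla's actual data model):

```python
# Toy model of the "blind man behind the wheel": a handful of proximity and
# map readings, and nothing else. Invented for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SparseWorldView:
    front_gap_m: Optional[float]    # distance to the vehicle ahead, if any is sensed
    lane_offset_m: float            # estimated offset from the lane centre
    speed_limit_kph: Optional[int]  # from map data, possibly stale
    # Absent: pedestrians, intentions of other drivers, road works, hand
    # signals - anything the sensors cannot resolve simply does not exist here.

def steer_and_throttle(view: SparseWorldView) -> str:
    """Drive using only what the sparse view contains."""
    if view.front_gap_m is not None and view.front_gap_m < 30.0:
        return "slow down"
    if abs(view.lane_offset_m) > 0.5:
        return "steer back toward the lane centre"
    return "hold speed and heading"

print(steer_and_throttle(SparseWorldView(front_gap_m=22.0, lane_offset_m=0.1, speed_limit_kph=120)))
```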

Oct 10, 2016
The blind man analogy is apt, because after the trailer-decapitation incident Tesla changed the AI to rely more on radar and sonar data, rather than the camera feed, which proved to be inadequate.

They only have so much computing power onboard, and while a supercomputer could eventually compare terabytes of data to correctly identify anything in the picture, the Tesla car only has the equivalent of a laptop computer running any visual identification algorithm, and the AI. After all, they can't spend kilowatts of power just running the brains of the machine.

Consequently, it just doesn't have the memory or the processing power to do an adequate job of it. To make it see more - like see people instead of just "blobs" - you'd have to program it with what people look like, in various outfits and environmental conditions, which is obviously an enormous amount of data to deal with. Far more than the little computer is capable of.


Oct 10, 2016
Plus, there are serious problems with the Tesla sensors:

https://nakedsecu...ardware/

Tesla Model S's autopilot can be blinded with off-the-shelf hardware


And as I've pointed out before, other cars with similar tech can provide the same interference simply by increasing the radar/sonar noise in the environment to the point that the cars can no longer tell which signal is theirs.

Jamming attacks can prevent ultrasonic sensors from detecting objects and cause collisions. In self-parking and summon mode, the Tesla model S car will ignore obstacles and crash into them during a jamming attack.

When the radio interference is switched on, the radio waves bouncing from the cart back to the Tesla are drowned out: you can see, in the video below, how the blue car icon disappears from the screen, meaning that the car's autopilot has been blinded to the obstacle in its path.
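
A toy model of why the jamming works (illustrative only, not the actual sensor firmware): an ultrasonic sensor only reports an obstacle when the returning echo stands out from the ambient noise, so raising the noise floor makes the same echo vanish.

```python
# Illustrative sketch of echo detection under jamming - not real firmware.
def echo_detected(echo_amplitude: float, noise_floor: float,
                  required_snr: float = 3.0) -> bool:
    """Accept the reflected pulse only when it stands out from the noise."""
    return echo_amplitude >= required_snr * noise_floor

# Normal conditions: the reflection from the obstacle is clearly above the noise.
print(echo_detected(echo_amplitude=0.9, noise_floor=0.1))  # True -> obstacle shown on screen

# Jamming raises the noise floor; the same echo is now buried, and the obstacle
# silently disappears from the car's picture of its surroundings.
print(echo_detected(echo_amplitude=0.9, noise_floor=0.5))  # False -> obstacle "gone"
```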

Oct 10, 2016
I don't see the problem. If a driver is using (non-adaptive) cruise control and smashes into the back of a car because he wasn't paying attention, is that the cruise control's fault? No.

Oct 10, 2016
I don't see the problem. If a driver is using (non-adaptive) cruise control and smashes into the back of a car because he wasn't paying attention, is that the cruise control's fault? No.


But the Tesla cruise control is supposed to be adaptive, and it is sold under the name "Autopilot" when it is clearly not able to autonomously pilot the vehicle in a safe manner.

Either the product is faulty, or Tesla is engaging in false advertising.

Oct 10, 2016
I want to have a beer and watch The Simpsons while being driven.

Oct 10, 2016
This comment has been removed by a moderator.

Oct 11, 2016
That's like saying you can make an earthworm able to drive a car by giving it an internet connection
Look, you've made it clear that you're categorically against AI cars and are unwilling to avail yourself of all the latest data that proves you wrong. And your large fatty posts confirm this.

So prattle on - be my guest.

Oct 15, 2016
Look, you've made it clear that you're categorically against AI cars


No I'm not. I'm categorically against people pretending the technology they currently have to be something it is not: intelligent.

In other words, I'm categorically against propaganda and lying.

latest data that proves you wrong


Give it to me. There's absolutely nothing indicating that Tesla is doing the right thing instead of just doing the easy thing, and the cheap thing, and adding more bubble gum and gaffer tape on top as they go. Elon Musk's motto is practically "it ain't a problem until it is a problem".

Did you happen to read the article I posted some time ago, where some insiders noted how Musk was totally unconcerned that the self-parking radar would be unable to detect objects the size of a cat, and he fired the engineer who expressed concern that the autopilot wasn't safe?

Oct 16, 2016
Look: if an algorithm fails it's because there are inputs that aren't expected or are missing, or there's something wrong in the way it evaluates some information. If an algorithm KNEW when it fails, then the engineers would have changed it to not fail in that situation instead of telling the driver "I just failed".


Not necessarily. If they are using a probabilistic classifier on the sensory data, it will have a confidence level; if that confidence level falls below a threshold, the system can alert the driver to take control of the vehicle. This, I assume, is what Tesla's autopilot already does. It won't help in scenarios where it has high confidence in an action that turns out to be the wrong action; those will be edge cases that the system needs to be further trained on.
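
As a rough sketch of that confidence-gating idea (hypothetical names and threshold, not Tesla's actual code), the system keeps control only while every detection clears a minimum confidence, and otherwise asks the driver to take over:

```python
# Minimal sketch of confidence-gated driver assistance - hypothetical, not Tesla code.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumed tuning parameter

@dataclass
class Detection:
    label: str         # e.g. "vehicle", "lane_marking", "stop_sign"
    confidence: float  # classifier's probability estimate, 0.0 to 1.0

def control_step(detections: list[Detection]) -> str:
    """Keep assisted driving only while perception is confident enough."""
    if not detections:
        return "ALERT: no usable perception - driver must take over"
    if min(d.confidence for d in detections) < CONFIDENCE_THRESHOLD:
        # Low confidence: warn the driver rather than act on a guess.
        return "ALERT: low confidence - driver must take over"
    return "OK: continue assisted driving"

print(control_step([Detection("vehicle", 0.97), Detection("lane_marking", 0.91)]))
print(control_step([Detection("stop_sign", 0.42)]))
```

As noted above, this only covers the low-confidence case; a confidently wrong classification sails straight through the gate.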
