The deadliest stage in self-driving development

May 30, 2018 by Hussein Dia, Swinburne University of Technology
When an autonomous Uber in Arizona failed to slow down, it fatally struck a 49-year-old woman.

Last week, the US National Transportation Safety Board (NTSB) released its preliminary report into the Uber self-driving crash that killed a woman in March.

The NTSB found that the car identified an object on the road seconds before the crash, but the car did not stop. The radar and lidar sensors on the modified Volvo XC90 SUV detected 49-year-old Elaine Herzberg about six seconds before the crash. The vehicle classified Herzberg first as an unknown object, then as a vehicle, and finally as a bicycle as she was walking her bike across the street.

About a second before impact, the self-driving system determined that emergency braking was needed to avoid a collision. But Uber had disabled the Volvo's factory-equipped automatic emergency braking system to avoid clashes with its own technology, the report said.
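The report's timeline suggests a pipeline in which detection, classification and emergency braking are separate stages, with the braking stage able to be switched off independently. Here is a purely illustrative Python sketch of such a sequence – none of these names, thresholds or structures come from Uber's actual software:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    time_to_impact: float  # seconds until predicted collision
    label: str             # classifier output, which may change frame to frame

def decide(detection: Detection, emergency_braking_enabled: bool) -> str:
    """Toy decision stage: emergency braking is a separate, switchable module."""
    if detection.time_to_impact <= 1.0:  # matches "about a second before impact"
        if emergency_braking_enabled:
            return "EMERGENCY_BRAKE"
        return "ALERT_OPERATOR"  # hand the problem back to the human
    return "CONTINUE"

# Reconstructed timeline from the NTSB report: the classifier re-interprets
# the same pedestrian several times over roughly six seconds.
timeline = [
    Detection(6.0, "object"),
    Detection(4.0, "vehicle"),
    Detection(2.0, "bicycle"),
    Detection(1.0, "bicycle"),  # braking was determined necessary about here
]

for d in timeline:
    print(f"{d.time_to_impact:.1f}s out ({d.label}): "
          f"{decide(d, emergency_braking_enabled=False)}")
```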

Things got worse from there.

The NTSB also found that Uber's self-driving software had been trained not to apply its own emergency braking in situations that risked "erratic vehicle behaviour." This was done to provide a comfortable ride: too many false positive detections (e.g. tree leaves, shrubs or plastic bags on the road) would trigger a large number of emergency stops that no passenger would tolerate.
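The tuning problem the report describes is a classic precision-versus-recall trade-off: raising the confidence bar for braking suppresses nuisance stops on leaves and bags, but it also suppresses braking on genuine hazards. A hedged sketch of that single knob, with confidence scores invented for illustration:

```python
def should_brake(confidence: float, threshold: float) -> bool:
    """Brake only when the detector's confidence clears the tuning threshold."""
    return confidence >= threshold

# Invented confidence scores, purely for illustration.
detections = [
    ("plastic bag", 0.35),  # nuisance detection the tuning is meant to ignore
    ("tree leaves", 0.40),
    ("pedestrian", 0.55),   # genuine hazard a high threshold also ignores
]

for threshold in (0.3, 0.5, 0.7):
    braked = [name for name, conf in detections if should_brake(conf, threshold)]
    print(f"threshold {threshold}: brakes on {braked if braked else 'nothing'}")
```

At the strictest threshold the system brakes for nothing at all – including the pedestrian.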

So, instead, the company relied on the backup driver to intervene at the last moment to avoid disaster. That did not happen.

Rethinking Level 3 and conditional automation

Most self-driving testing today requires human intervention. This is what's referred to as Level 3, or conditional automation – the stage of autonomous vehicle development I consider the most dangerous, because it involves handing vehicle control back to the backup driver in an emergency.

A few companies have already chosen to skip Level 3 and target the safer Level 4, in which the vehicle handles the entire driving task within its operating domain, with no expectation that a human will take over.

In fact, I would argue that Level 3 should be explicitly prohibited on open roads. Having a human step back into the control loop at the last possible moment is nothing short of a guaranteed disaster.

With neither automatic emergency braking system available in the Uber vehicle, the company was relying on the backup operator to intervene at a moment's notice to prevent a crash. This is problematic, because passing control from car to human is fraught with difficulty, especially when the backup operator has zoned out. Video footage showed the operator looking down immediately before the crash. She braked only after the collision. Herzberg was killed.

Level 3 also gives drivers a false sense of security. In March, a Tesla driver was killed in a crash in California while his vehicle was running on Autopilot. In May 2016, a Tesla driver died when his car, also on Autopilot, crashed into a truck in Florida. These vehicles are designed to be driven by humans, assisted by self-driving technologies – not driven by computers under human supervision.

Regulatory intervention – the way forward

The NTSB report highlights not only the shortcomings of Uber's testing program, but also a failure to regulate trials on open roads.

A report published last year showed that the readiness of self-driving software varies widely between providers. Waymo's self-driving software was about 5,000 times safer than Uber's, according to the report. This was measured by the rate of disengagements: occasions when the automated system forced the backup driver to take control of the vehicle. Uber's rate was one disengagement per mile driven, while Waymo's was one disengagement every 5,128 miles.
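The metric itself is simple arithmetic: miles driven divided by the number of disengagements. A quick check of the figures above (the numbers come from the cited report; the comparison is only as good as that data):

```python
# Disengagement figures cited above: miles driven per disengagement.
waymo_miles_per_disengagement = 5128
uber_miles_per_disengagement = 1

ratio = waymo_miles_per_disengagement / uber_miles_per_disengagement
print(f"Waymo drove about {ratio:,.0f} times farther between disengagements")
# -> roughly the "5,000 times safer" figure quoted in the report
```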

The industry is self-regulating, and it is unclear how companies determine whether their technology is safe to operate on public roads. Regulators have also failed to provide criteria for making such determinations.

While it is necessary to test the performance of self-driving software under real-life conditions, trials on open roads should not be about testing the safety of the systems. Safety should be comprehensively evaluated before the vehicles are allowed on public roads.

An appropriate course of action would be for regulators to devise a set of standardised tests, and to require companies to benchmark their algorithms on the same data sets.
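To make that concrete, a shared benchmark could, in principle, run every company's decision policy against an identical scenario set. The sketch below is purely hypothetical – the scenario format, policies and scoring are invented for illustration, and no such regulatory harness exists today:

```python
from typing import Callable, Dict, List

# A scenario pairs a sensor situation with the action a safe system should take.
SCENARIOS: List[Dict[str, str]] = [
    {"input": "pedestrian_crossing_at_night", "expected": "brake"},
    {"input": "plastic_bag_on_road", "expected": "continue"},
    {"input": "cyclist_merging_ahead", "expected": "yield"},
]

def benchmark(name: str, policy: Callable[[str], str]) -> None:
    """Score one company's decision policy against the shared scenario set."""
    passed = sum(policy(s["input"]) == s["expected"] for s in SCENARIOS)
    print(f"{name}: {passed}/{len(SCENARIOS)} scenarios passed")

# Hypothetical stand-ins for real self-driving stacks.
benchmark("company_a", lambda _: "brake")  # brakes on everything, fails two
benchmark("company_b", {"pedestrian_crossing_at_night": "brake",
                        "plastic_bag_on_road": "continue",
                        "cyclist_merging_ahead": "yield"}.get)
```

The point is not the toy policies but the shared data set: every company is measured against the same situations, so the scores are comparable.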

Regulators should follow a graduated approach to certification, sketched in code below. First, the self-driving system is evaluated in simulation environments, which builds confidence that the system operates safely. This is followed by real-world testing in confined environments (e.g. on test-beds). Only once the vehicles pass these benchmark tests should regulators allow them on open roads, and then with safety conditions attached.
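In effect this is a gated pipeline: each stage must pass before the next unlocks. A minimal sketch of that logic, with invented stage names and pass criteria rather than any real regulatory thresholds:

```python
# Certification stages in order; pass criteria are invented placeholders,
# not actual regulatory thresholds.
STAGES = [
    ("simulation", lambda r: r["sim_pass_rate"] >= 0.999),
    ("confined test-bed", lambda r: r["track_incidents"] == 0),
    ("open roads (with safety conditions)", lambda r: r["audit_passed"]),
]

def certify(results: dict) -> str:
    """Walk the stages in order and stop at the first failure."""
    for name, passes in STAGES:
        if not passes(results):
            return f"certification stopped at stage: {name}"
    return "approved for open roads, subject to ongoing safety conditions"

print(certify({"sim_pass_rate": 0.9995, "track_incidents": 0, "audit_passed": True}))
```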

This tragic incident should be a catalyst for regulators to establish a strong and robust safety culture to guide innovations in self-driving technologies. Without this, autonomous vehicle deployment would go nowhere very fast.
