Should cars be fully driverless? No, says an engineer and historian

October 13, 2015 by Peter Dizikes, Massachusetts Institute of Technology
David Mindell and the cover of “Our Robots, Ourselves” (Viking/Penguin) Credit: Len Rubenstein

If you follow technology news—or even if you don't—you have probably heard that numerous companies have been trying to develop driverless cars for a decade or more. These fully automated vehicles could potentially be safer than regular cars, and might add various efficiencies to our roads, like smoother-flowing traffic.

Or so it is often claimed. But the promise of artificial intelligence and advanced sensors could be achieved without full autonomy, argue scholars with deep expertise in automation and technology—including David Mindell, an MIT professor and author of a new book on the subject.

If robotics in extreme environments are any guide, Mindell says, self-driving cars should not be fully self-driving. That idea, he notes, is belied by decades of examples involving spacecraft, underwater exploration, air travel, and more. In each of those spheres, fully automated vehicles have frequently been promised, yet the most state-of-the-art products still have a driver or pilot somewhere in the network. This is one reason Mindell thinks cars are not on the road to complete automation.

"That's just proven to be a loser of an approach in a lot of other domains," Mindell says. "I'm not arguing this from first principles. There are 40 years' worth of examples."

Now Mindell, the Frances and David Dibner Professor of the History of Engineering and Manufacturing in MIT's Program in Science, Technology, and Society, and also a professor in MIT's Department of Aeronautics and Astronautics, has detailed the history in his new book, "Our Robots, Ourselves," being published Oct. 13 by Viking Books.

To be clear, Mindell thinks that "it's reasonable to hope" that technology will help cars "reduce the workload" of drivers in incremental ways in the future. But total automation, he thinks, is not the logical endpoint of vehicle development.

"The book is about a different idea of progress," Mindell says. "There's an idea that progress in robotics leads to full autonomy. That may be a valuable idea to guide research … but when automated and autonomous systems get into the real world, that's not the direction they head. We need to rethink the notion of progress, not as progress toward full autonomy, but as progress toward trusted, transparent, reliable, safe autonomy that is fully interactive: The car does what I want it to do, and only when I want it to do it."

Shooting for the "perfect 5"

To see why Mindell thinks history shows us that automation is not the endpoint of vehicular development, consider the case of undersea exploration. For decades, engineers and scientists thought that fully automated submersibles would be a step forward from the seemingly risky work of deep-sea journeys.

Instead, something unexpected happened with submersibles: Technological progress, including improved communications technologies, made it less useful to have fully automated vehicles sweeping across the sea floor. Rather, Mindell notes, submersibles "are more effective when they have even a little communication" with people monitoring and controlling them.

Or consider the Apollo program, which put U.S. astronauts on the moon six different times. Originally, Mindell notes, the expectation was that moon missions would be fully automated, with astronauts nothing more than passengers. But in the end—and partly due to the feedback of the astronauts themselves—astronauts handled many critical functions, including the moon landings.

"The sophistication of the computer and the software was used not to push people out, but to give them true control over the landing," Mindell says.

And then there are airplanes. Commercial airliners do have many automated systems, such as cruise control-type features and even systems that can automate landings in certain circumstances. But it still takes highly trained pilots to manage those systems, make critical decisions in the cockpit—and, yes, frequently to steer the planes.

"Commercial aviation is incredibly safe," says Mindell, himself a qualified civil aviation pilot with more than 1,000 hours of flying time to his credit. "Part of the reason is there are a lot of highly technical systems, but those systems are all imperfect, and the people are the glue that hold the system together. Airline pilots are constantly making small corrections, picking up mistakes, correcting the air traffic controllers."

Drawing on a concept developed by MIT professor of mechanical engineering Tom Sheridan, Mindell notes that the level of automation in a project can be judged on a scale from 1 to 10—and aiming for 10, he contends, does not necessarily lead to more success in any given endeavor, compared to a happy medium of technology and person. In the space program, Mindell reflects, "The digital computer in Apollo allowed them to make a less automated spacecraft that was closer to the perfect 5."
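Sheridan's scale is often paraphrased as a ten-step ladder from fully manual to fully autonomous control. The sketch below is an informal summary of the commonly cited levels, not Sheridan's exact wording:

```python
# Informal paraphrase of Sheridan and Verplank's ten levels of automation.
# The descriptions are a rough summary, not the original text.
SHERIDAN_LEVELS = {
    1: "The human does everything; the computer offers no assistance.",
    2: "The computer offers a complete set of action alternatives.",
    3: "The computer narrows the alternatives down to a few.",
    4: "The computer suggests a single action.",
    5: "The computer executes the suggested action if the human approves.",
    6: "The computer allows the human limited time to veto before acting.",
    7: "The computer acts automatically, then necessarily informs the human.",
    8: "The computer informs the human after acting only if asked.",
    9: "The computer informs the human after acting only if it decides to.",
    10: "The computer decides and acts entirely on its own, ignoring the human.",
}

# Mindell's "perfect 5": the machine does the work, but under human approval.
print(SHERIDAN_LEVELS[5])
```

On this reading, Apollo's digital computer moved the spacecraft toward the middle of the ladder rather than the top.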

Full automation a "20th-century narrative"

So why, in the case of cars, are we back to a point where many people are envisioning a driverless future? In a way, Mindell says, this vision of the future belongs squarely to the past.

"I think the narrative of full autonomy is a 20th-century narrative," Mindell says. "It's a narrative of industrial mechanization that's kind of filtered its way through the 20th century, supported by 20th-century science fiction. These narratives can and should change."

Still, the idea of total automation is the approach taken by Google, most notably, in its development of self-driving cars. Yet as Mindell also observes, there are many challenges to the Google model: Its cars must identify all nearby objects correctly, need perfectly updated mapping systems, and must avoid all software glitches.

Ultimately, Mindell writes, "Google's utopian autonomy is a more brittle, less functional solution than a rich, human-centered automation." He predicts that the fully driverless model will not be the most successful, both for technical and social reasons.

"The notion of ceding control of something as fundamental to life as driving to a big, opaque corporation—people are not comfortable with that," he says. Additionally, other companies and research groups looking at automating cars are "very clearly not going for the Google approach" to full autonomy.

Other scholars have found "Our Robots, Ourselves" to be valuable. Ian Bogost, a professor of media studies and interactive computing at Georgia Tech, calls the book "a lucid, hype-free exploration of how robotic automation really works" in tandem with human design and operation.

Mindell says he is eager to see how technologists, especially robotics engineers, react to the book. Among the places where Mindell is scheduled to speak on his current book tour are Microsoft and, yes, Google. In time, Mindell says, he believes his perspective will come to be more widely accepted, and that full autonomy on the roads will not seem as desirable a goal.

"I think the public discourse is slowly coming around to there is another way to do it," Mindell concludes.

27 comments

Eikka
5 / 5 (3) Oct 13, 2015
Another way to look at the same thing is to note that having fully automated cars is not efficient use of resources - computing or otherwise.

If you require a "brain" at least as smart as a person to successfully navigate real world traffic, then in a self-driving vehicle there are at least two such brains - the car and its passenger.

Such redundancy is pointless because the passenger can become the driver, and so the car doesn't need to be nearly as smart - therefore not nearly as complex and costly.

The problem of having self-driving cars that are not as smart as we are is beautifully illustrated by taking note of the last time we had such vehicles. They were called "horses", and they caused a huge mess in the cities exactly because they didn't have enough mental capacity to understand what it was all about and didn't blindly obey human commands. When automobiles replaced horses, the accident rates actually dropped to a fraction.

antialias_physorg
5 / 5 (2) Oct 13, 2015
yet the most state-of-the-art products still have a driver or pilot somewhere in the network.

...because people are hesitant to accept being ferried around in a driverless vehicle. In a town nearby they set up a driverless tramway. Although it performed fine, many people had 'trust' issues riding in it.
It's really more of a psychological problem, because it turns out humans who are supposed to intervene in the case of a critical situation are terrible at it (whether trained or not).

The thing is that it's hard to gauge for a human when such a non-recoverable situation occurs or whether the situation is still well within the computer's control - so humans tend to wait too long or interfere much too early.
It's not really possible to alleviate this with an indicator that tells someone "AI can't handle this - take over", because these are always unforeseen incidents. A pilot would have to have all foreseen situations in mind at all times to judge correctly.
ab3a
5 / 5 (2) Oct 13, 2015
I am also a pilot and an engineer and I agree with this conclusion.

Let me also point out that while highway driving, and even some urban driving is possible to automate, there will always be conditions where a human should guide the computers. My concern is how much automation is helpful and how much actually hurts. We don't want people forgetting how to drive because the computer does so much of it.

Also, there needs to be some sort of reasonable, timely notification that something needs to be done. In other words, computers need to do something safe. That is easier said than done. The nice thing about aircraft automation is that there is usually time to take action before one is in a disastrous situation. In a car, the hazard may be much more immediate.

I predict new driver's license endorsements and training requirements for automated driving. This will be interesting...
Eikka
5 / 5 (2) Oct 13, 2015
It's really more of a psychological problem, because it turns out humans who are supposed to intervene in the case of a critical situation are terrible at it (whether trained or not).


That's a misconception. The human driver isn't there to be the circuit breaker and split-second decision maker.

The human in the loop intervenes -before- the critical situation develops. The human double checks the computer, and makes the call between good and bad information, while the computer is only good at the split-second reactions when it's already too late.

If a tram is about to pummel through an intersection without stopping because the magnetic loop sensors in the road are malfunctioning, the person can still see from a street ahead that the intersection isn't clearing up like it's supposed to. They can slow the tram down in anticipation that it may need to brake.

The driverless automatic tram would happily truck ahead because it's completely unaware of what's going on.
shavera
5 / 5 (7) Oct 13, 2015
I don't know that the historian's approach is necessarily the best one. The argument is somewhat tautological. "Machines haven't been shown to be capable of independent operation, therefore machines won't be capable of independent operation."

The examples of sea and air travel are still somewhat beside the point. We've had (relatively) primitive tools available, in sensors, processing, and algorithms. But as the field has evolved, we've been able to delegate more and more to the tools rather than the human. We may not have eliminated the human from the control loop altogether, but certainly we've shrunk their relevance.

So if the trend has been a decreasing level of human interaction, wouldn't it stand to reason that at some point in the future it actually hits zero?
Eikka
3.7 / 5 (3) Oct 13, 2015
We shouldn't forget that because the human is self-aware and understands what they're doing, they are able to predict what should be happening. A human predicts, a computer reacts. Humans are lousy at reacting, while computers are lousy at predicting.

A human driver anticipates what cars ahead are soon going to do much better than a computer would, and that's why we don't need to keep a rigid 2 second gap to avoid collision. We are able to predict the future correctly from cues that the computer has immense trouble even perceiving, much less interpreting correctly.

For example, several cars ahead of you start to wiggle from side to side, but you can't see why. You intuitively lift your foot off the accelerator, and five seconds later it turns out there's a boulder on the road that everyone's avoiding.

The computer wouldn't even notice the wiggling cars, much less try to understand what's happening until the boulder is right in front of it.
Eikka
5 / 5 (2) Oct 13, 2015
The argument is somewhat tautological. "Machines haven't been shown to be capable of independent operation, therefore machines won't be capable of independent operation."


That's a misrepresentation of the argument.

The real argument given is that machines have been shown to be capable of independent operation, but they've been shown to operate better in co-operation with human controllers rather than completely autonomously.

The examples of sea and air travel are still somewhat beside the point.


They're exactly on the point. The tools we have, have become increasingly accurate and dependable, but they lack the necessary intelligence and awareness to correctly react to anything outside of pre-programmed parameters or to correct their own errors, and therefore require constant supervision by humans to work efficiently and safely.
Eikka
5 / 5 (3) Oct 13, 2015
Here's a comic strip that illustrates the problem rather well:

http://www.explai...5:_Tasks

" In the 60s, Marvin Minsky assigned a couple of undergrads to spend the summer programming a computer to use a camera to identify objects in a scene. He figured they'd have the problem solved by the end of the summer. Half a century later, we're still working on it."


https://en.wikipe..._paradox

Moravec's paradox is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources.

As Moravec writes, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."
antialias_physorg
4.2 / 5 (5) Oct 13, 2015
We don't want people forgetting how to drive because the computer does so much of it.

I'm not sure about this. Driving is not an innately essential skill. Everything works just as well if people are ferried around automatically.
By contrast I'm much more concerned about kids not being able to add/multiply because their phone does it for them. Not being able to judge numerical relationships is catastrophic when having to judge probabilities (read: gauge the importance of issues). It's even very evident on these comment sections who can and who can't add up (it's how you identify the cranks).

there will always be conditions where a human should guide the computers.

The argument can be made that if accidents happen in those conditions (if they are rare enough) and this is outweighed by a lot of accidents prevented by forbidding intervention (i.e. accidents caused by false intervention by humans), then that is worth it.
antialias_physorg
4 / 5 (4) Oct 13, 2015
In a car, the hazard may be much more immediate.

That's why all the statistics collected with autonomous cars where humans can intervene show that humans fail. Badly. (and these are all trained engineers knowledgeable about the systems - so much more prepared than the average driver will be).

1) For a human to react to a critical situation he has to
a) Parse the situation
b) React

2)In an autonomous vehicle he has to
a) Parse the situation
b) Realize that the computer can't handle it
c) React

As you say, situations in a car are more immediate. Considerable time is spent in 2b, which means the point is very quickly reached where no human intervention will make a difference. (and at the level of attentiveness this would require, autonomous transport would be much more stressful than just driving manually all the time. In a plane you are alone for long stretches of a journey. On the road? Not so much)
Eikka
5 / 5 (2) Oct 13, 2015
I'm not sure about this. Driving is not an innately essential skill. Everything works just as well if people are ferried around automatically.


Only if the automation works absolutely all the time and requires no human intervention. Until such time, people have to know how to drive. We can't even blindly trust our GPS navigators to get us where we need to go, and the people who do end up in rivers and falling into road construction holes.

The argument can be made that if accidents happen in those conditions...


Actually, the argument is that when the computers get stuck and don't know what to do, or start to behave irrationally and erratically because their rigid programming conflicts with reality, or because of a physical malfunction, people have to take over and drive the car or else nobody's going anywhere.

If nobody can drive cars, you're stuck, or worse: everybody tries and fails.
Eikka
5 / 5 (2) Oct 13, 2015
2)In an autonomous vehicle he has to
a) Parse the situation
b) Realize that the computer can't handle it
c) React


That's just assuming that the driver would give the computer enough rope to hang itself before taking the wheel and attempting to correct.

That's like an airline pilot approaching the landing site on autopilot in heavy crosswinds, in the dark, in rain, and going, "Hmm... let's let the computer have a go first and if it fails, we'll do it manually."

Whether a car or a plane, that's too late. You have to make the decision to trust the computer before you get into the situation. Letting the computer have a go first is just a recipe for a crash. That's why it's actually an argument against the self-driving car, because if you can't trust it to perform, you can't let it drive.

TheGhostofOtto1923
4.2 / 5 (5) Oct 13, 2015
...because people are hesitant to accept being ferried around in a driverless vehicle
Yes of course. They prefer being driven by taxi drivers and uber musheads, and riding on subways, all driven by the potentially impaired and distracted.

I encountered an old woman the other day, after dark, who was completely stopped on an on-ramp. She didn't know how to merge.

She then proceeded to drive the next 5 miles with her feet on the gas AND the brake. How did I know? Her brake light was on the whole time.

I saw 2 old people step on the gas instead of the brake in one week. One ran into the side of a Ford dealership and the other hit a lamp post, continuing to press harder and harder on the gas thinking it was the brake, smoke billowing up from her spinning tires, for about 40 seconds until it stalled.

I saw an old guy pull out right in front of another car, sending it sailing into the air.

People like this need to be taken off the road. Give them AI cars.
TheGhostofOtto1923
4.3 / 5 (6) Oct 13, 2015
Actually, the argument is that when the computers get stuck and don't know what to do, or start to behave irrationally and erratically because their rigid programming conflicts with reality, or because of a physical malfunction, people have to take over and drive the car or else nobody's going anywhere
Humans get stuck and behave irrationally far more frequently than computers. And computers can be programmed to anticipate 1000s of potential conflicts on the highway, and also be programmed with the best way of dealing with them.

Humans can't.
Only if the automation works absolutely all the time and requires no human intervention
Why? Are you saying that no accidents can be tolerated?

Computers already have a much better record. They would IMMEDIATELY reduce accidents.

And as accidents inevitably happen, new info can be included in upgrades. Computer driving will get better and better as a result.
ab3a
5 / 5 (2) Oct 13, 2015
The argument can be made that if accidents happen in those conditions (if they are rare enough) and this is outweighed by a lot of accidents prevented by forbidding intervention (i.e. accidents caused by false intervention by humans), then that is worth it.


This is what I like to call the Boeing versus the Airbus philosophies. Airbus believes more in software, Boeing believes more in the pilot. They have very similar accident rates. Boeing has to deal with pilot error, Airbus has to deal with control software that can't easily be overridden. Both are highly automated, however, and the one common problem is that when a human has to take over, a lot of thinking has to happen real fast. Many pilots aren't ready for this. Look up the tragic AF 447 flight or Asiana flight 214.

That's why I state that where drivers need to take over, they have to be ready to react very fast. I predict a very deadly learning curve...
adam_russell_9615
5 / 5 (1) Oct 13, 2015
One thing you can't have is a 99% capable automatic car. If you expect a human to not do anything 99% of the time but be ready to step in if something goes wrong, it won't work - at all. Humans will always lapse if they have nothing to do for too long.
SkyLy
3 / 5 (2) Oct 14, 2015
What brand of crystal ball is this guy using? We'll be able to fully automate cars with ease, and it won't need a full brain to perform at all; an 8W computer plus sensors will be plenty sufficient in a few years.
ProcrastinationAccountNumber3659
2.3 / 5 (3) Oct 14, 2015
I do not find his arguments very convincing. Looking at only past examples is not helpful with technology. You have to look at the past and try to extrapolate into the future. You have to ask what advantages does a computer have over a human?

Computer:
-Easily deals with large amounts of data.
-Communicate with other vehicles.
-Fast response time.
-Additional senses compared to a human.
-Can effectively have 100s of years of driving experience and can be trained fast.
-Can efficiently multitask (humans cannot).
-Cannot be distracted or fatigued.
-Can accurately simulate outcome of actions.
-susceptible to software/system problems.

Human:
-good prediction circuitry, but must have experienced event previously.
-new drivers are dangerous.
-limited senses.
-poor reaction time.
-easily distracted or fatigued.
-cannot multitask.
-cannot deal with too much information.
-susceptible to health problems.

I cannot justify leaving a human in control of the vehicle.
simzy39
3 / 5 (2) Oct 14, 2015
Eikka, you need to stop and think for a moment, along with researching driverless cars.
There are 1.2 million traffic fatalities annually, according to the World Health Organization. This year in Australia, over 800 deaths so far. If you watched the TED talk on how driverless cars see, given by the head Google engineer of the project, you would realise that the computer can see more and predict more than a human. They prove it. Also, Google releases monthly reports. Just search 'Google Self-Driving Car Project Monthly Report.' Here is a direct quote: "In the six years of our project, we've been involved in 14 minor accidents during 1.8 million miles of autonomous and manual driving combined. Not once was the self-driving car the cause of the accident."
They are constantly gathering and storing data on the driving, the objects/people the cars see, and situations they encounter; all information that you and I sure as heck can't store. Also, these cars can communicate with each other.
ab3a
5 / 5 (2) Oct 14, 2015
I cannot justify leaving a human in control of the vehicle.


Nice theory. It works great until things break. Humans make mistakes. Mechanics screw up. Instruments fail.

Humans have saved countless flights on autopilot. Computers can not evaluate risk from dynamic situations such as weather --and even if they could, would you trust someone at a desk with making decisions that might maim or kill you? Computers can not diagnose every possible failure mode (they can diagnose the common ones we think about) either.

Perhaps you may not have problems getting into a car with millions of lines of code written by people who never dreamed their code would do what it is doing now. But if the Barr v. Toyota case is any indication, there will always be problems.
TheGhostofOtto1923
4 / 5 (4) Oct 14, 2015
This is what I like to call the Boeing versus the Airbus philosophies. Airbus believes more in software, Boeing believes more in the pilot. They have very similar accident rates
At the moment. Computers can be updated with lessons learned from previous glitches. The way to fix human glitches has been to add more automation.

And so either way humans will eventually be excluded.

That pilot who miraculously landed his jet in the Hudson was very skillful indeed, but I would much rather trust a computer that is constantly monitoring all conditions simultaneously, equipped with the proper sensors, and immune to jitters and sweaty hands and weak bladders and nausea, to do something like that.
ab3a
3.7 / 5 (3) Oct 14, 2015
That pilot who miraculously landed his jet in the hudson was very skillful indeed but I would much rather trust a computer that is constantly monitoring all conditions simultaneously, equipped with the proper sensors, and immune to jitters and sweaty hands and weak bladders and nausea, to do something like that.


As a controls engineer for 30 years, I know the limits of automation better than most. One of the most tenacious problems is the sheer complexity of the software required to run things reliably. The human being is required as a backup mostly because there will always be instances that can not be easily tested. The more complex the software gets, the more difficult it will be to ensure it does something reasonable in every unusual case.

Driving a car is much more complex than it appears to be. I'm pretty certain that there will always be a human involved, whether the car has an autodriver or not.
I Have Questions
not rated yet Oct 17, 2015
Whatever the case that doesn't mean trying is a bad thing to do.
fay
5 / 5 (3) Oct 17, 2015

For example, several cars ahead of you start to wiggle from side to side, but you can't see why. You intuitively lift your foot off the accelerator, and five seconds later it turns out there's a boulder on the road that everyone's avoiding.

The computer wouldn't even notice the wiggling cars, much less try to understand what's happening until the boulder is right in front of it.

Sorry, but that's utter BS. When the autocars come, the computers won't need to predict others' behavior - they will actually *know* it, because the cars will nonstop communicate what they are doing. So in your example the autocar won't be clueless until the last second; on the contrary, it will know about the boulder kilometers ahead. And the autocars won't need the 2 seconds either, because when the car in front of yours slams the brakes, yours will have reaction time in milliseconds rather than in tenths of seconds, or even whole seconds if one is distracted.
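The reaction-time point above is easy to quantify with a back-of-the-envelope calculation. The latency figures below are illustrative assumptions, not measured values for any real system:

```python
# Distance covered during the reaction delay at highway speed.
# Both latency figures are rough illustrative assumptions.
speed_mps = 100 * 1000 / 3600        # 100 km/h expressed in metres per second
human_reaction_s = 1.5               # assumed driver perception + braking delay
computer_reaction_s = 0.05           # assumed sensor-to-brake latency

human_gap_m = speed_mps * human_reaction_s
computer_gap_m = speed_mps * computer_reaction_s

print(f"human: {human_gap_m:.1f} m, computer: {computer_gap_m:.1f} m")
```

At these assumed figures, the human covers roughly 40 m before the brakes even engage, while the computer covers under 2 m - which is the whole case for shorter following distances, provided the computer's perception can be trusted.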
Moebius
1 / 5 (1) Oct 17, 2015
Fully autonomous vehicles are a stupid idea, I've said it before. I just realized though that terrorists are going to love them. Just load them up with explosives and send them off. No need to send suicide wack jobs.
Protoplasmix
5 / 5 (2) Oct 17, 2015
Fully autonomous vehicles are a stupid idea, I've said it before.
Repeating errata until it's believed may work on Fox "News" viewers – on a science site it tends to draw the ire of everyone.

I just realized though that terrorists are going to love them. Just load them up with explosives and send them off. No need to send suicide wack jobs.
You realize the craziness of the fear mongering narrative just enough to heap more stupidity on top of it. You're ignorant of technology and fail miserably at critical thinking.
Torbjorn_Larsson_OM
5 / 5 (2) Oct 18, 2015
There is a cultural divide between US and many other nations in the matter of automation. Seems Tesla engineers worry about how to "avoid all software glitches", while Volvo engineers worry about not automating fast enough since such vehicles are safer than manned ones. (Maybe the first worry about litigation of the US kind for taking responsibility, while the latter worry about litigation of the European kind for not taking responsibility.)

Mindell's main argument is that some vehicles are not fully automated yet. But you can argue from the side of a service provider, where train passengers do not care about who is controlling. There have been driverless systems since 1985. [ https://en.wikipe..._systems ; https://en.wikipe...peration ] Indeed, there are strategies that want to make passenger airplanes automatic too for increased safety.
