University of Massachusetts Amherst computer science graduate students Kyle Wray and Luis Pineda, with their professor Shlomo Zilberstein, today described a new approach to managing the challenge of transferring control between a human and an autonomous system, in a paper they presented at the International Joint Conference on Artificial Intelligence in New York City.
Their theoretical work, tested in experiments in a driving simulator, should help to advance the development of safe semi-autonomous systems (SAS) such as self-driving cars. Such systems rely on human supervision and on occasional transfers of control between the human and the automated system, Zilberstein explains. With substantial support from the National Science Foundation and the auto industry, his lab is working on new approaches to SAS that are controlled collaboratively by a person and a machine, so that each capitalizes on its distinct abilities.
"Self-driving cars are coming," says Zilberstein, "but the world is fairly chaotic and not many autonomous systems can cope with that yet. My sense is that we're pretty far from having fully autonomous systems in cars." This is because artificial intelligence sensing and decision-making techniques are still limited and no matter how much training and design are used, there is no sufficiently accurate model of the real world that allows such systems to operate reliably.
For example, he suggests, "Trains might be next as a candidate for autonomy, but even then, with a downed branch on the track during a storm, a person may be needed to judge how to proceed safely."
The researcher says the example highlights a significant challenge that SAS research must address: transferring control quickly, safely and smoothly between the system and the person supervising it. Most systems designed to date do not accomplish this. "Paradoxically," says Zilberstein, "as we introduce more autonomy, people become less engaged with the operation of the system and it becomes harder for them to take over control." In the paper presented today, to be published in the conference proceedings, the researchers establish precise requirements to ensure that controlling entities can act reliably.
They apply the theoretical framework to semi-autonomous vehicles using a hierarchical or step-wise approach with two levels of reasoning. The high-level route planning takes into account the occasional need to transfer control, without planning it in detail. The actual transfer of control is managed by a detailed, "high-fidelity" model that notifies drivers of the actions expected of them and constantly monitors their reactions. If the driver does not respond to a request to take over, for example, it can bring the vehicle to a safe stop, Zilberstein explains. Their analysis of the integrated model shows that it provides important safety guarantees.
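The low-level transfer-of-control behavior described above can be pictured as a small state machine: the system requests a handover, monitors the driver's response for a bounded time, and falls back to a safe stop if no response arrives. The sketch below is purely illustrative; the class name, states, and timeout parameter are hypothetical and not taken from the paper.

```python
from enum import Enum, auto

class Control(Enum):
    AUTONOMOUS = auto()          # vehicle is driving itself
    TRANSFER_REQUESTED = auto()  # driver has been asked to take over
    HUMAN = auto()               # driver is in control
    SAFE_STOP = auto()           # fallback when the driver never responds

class TransferOfControlMonitor:
    """Illustrative low-level controller: request a handover, watch the
    driver's response, and fall back to a safe stop on timeout."""

    def __init__(self, response_deadline_s=8.0):
        self.state = Control.AUTONOMOUS
        self.deadline = response_deadline_s
        self.elapsed = 0.0

    def request_transfer(self):
        # Triggered when the high-level planner reaches a road segment
        # it does not expect to handle autonomously.
        self.state = Control.TRANSFER_REQUESTED
        self.elapsed = 0.0

    def tick(self, dt, driver_ready):
        # Called at each monitoring step with a driver-attention signal.
        if self.state is Control.TRANSFER_REQUESTED:
            if driver_ready:
                self.state = Control.HUMAN          # successful handover
            else:
                self.elapsed += dt
                if self.elapsed >= self.deadline:
                    self.state = Control.SAFE_STOP  # never left hanging
        return self.state
```

The key safety property the sketch captures is that every handover attempt terminates in a well-defined state: either the human confirms control, or the vehicle stops safely on its own.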
The researchers show how to apply this general framework to SAS for vehicles and demonstrate that it maintains what they call a "live state." Intuitively, this yields "strong semi-autonomy": the system is never placed under the responsibility of an entity that is not prepared to handle the situation. Their experiments show that this approach makes good use of both human and vehicle strengths.
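The strong-semi-autonomy property can be read as an invariant over a route plan: no road segment is ever assigned to a controller that is not competent to handle it. The toy checker below illustrates that invariant; the function name and data shapes are hypothetical, chosen only to make the idea concrete.

```python
def violates_strong_semi_autonomy(plan, competence):
    """Return the road segments whose assigned controller is not
    prepared to handle them (an empty list means the plan satisfies
    the invariant).

    plan:       maps segment name -> controller ("human" or "vehicle")
    competence: maps controller -> set of segments it can handle
    Both shapes are illustrative, not from the paper.
    """
    return [seg for seg, who in plan.items() if seg not in competence[who]]
```

For example, a plan that leaves the vehicle in charge during a segment only the human can handle would be flagged, and the high-level planner would have to schedule a transfer of control before that segment begins.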
Zilberstein and colleagues plan to integrate this approach using a large-scale realistic driving simulator in collaboration with professors Donald Fisher and Siby Samuel, as well as postdoctoral fellow Timothy Wright of the Arbella Human Performance Lab in UMass Amherst's College of Engineering.
Developing reliable ways to transfer control back to the driver when an anomaly is detected is a crucial component of deploying self-driving cars. This work will allow the researchers to validate the new approach with human drivers controlling a self-driving car while performing a variety of tasks.