New study challenges long-accepted views on human-autonomy interaction


A team of Army scientists and engineers has challenged long-held views in the area of human-autonomy interaction, aiming to change how science involves people, especially in developing advanced technical systems that involve artificial intelligence and autonomy.

As part of a research program initially funded in 2013 by the Office of the Secretary of Defense, U.S. Army Research Laboratory researchers led a multi-disciplinary team of Department of Defense, industry, and academic researchers to develop a novel, principled, general-purpose framework.

The research team proposes what it has named the Privileged Sensing Framework, or PSF, which was conceived to leverage recent advances in human sensing technologies to dynamically integrate human and autonomous agents on the basis of their individual characteristics. For example, humans tend to adapt easily to changes in the environment or task, while autonomous agents can typically process large amounts of data more quickly than humans, explained Dr. Amar Marathe, a researcher in ARL's Real-World Soldier Quantification Branch.
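The article does not publish the PSF's equations, but the idea of weighting each agent by its strengths in the current context can be sketched. In the minimal Python illustration below, the function name, weights, and context labels are all hypothetical stand-ins, not the framework's actual mechanics.

def fuse_decisions(human_estimate: float,
                   machine_estimate: float,
                   context: str) -> float:
    """Blend two agents' estimates with context-dependent weights.

    Illustrative assumption only: in a novel or changing environment the
    human, who adapts easily, is weighted more heavily; in a high-data-rate
    task the autonomous agent, which processes large volumes of data
    quickly, dominates.
    """
    weights = {
        "novel_environment": (0.7, 0.3),   # favor the adaptable human
        "high_data_rate":    (0.2, 0.8),   # favor the fast machine
    }
    w_human, w_machine = weights.get(context, (0.5, 0.5))
    return w_human * human_estimate + w_machine * machine_estimate

print(fuse_decisions(0.9, 0.4, "novel_environment"))  # human-leaning blend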

The focus of this research was to demonstrate how the PSF preserves the human as a primary, critical and central authority while also enabling autonomous agents, like robots, to detect and mitigate cases where people's decisions or actions would lead to dysfunction or even catastrophe, Marathe said.

"The research was fundamentally enabled by a critical move towards a novel control systems framework that can account for dynamic interactions among information components that impact the value of that information and yet appropriately propagates into robust overall decisions. The PSF provides an evolved approach to HAI that treats the human as a special class of sensor rather than as the ultimate and absolute command arbiter.

The PSF is based on the concept of appropriately 'privileging' information during integration: specific agents are granted special rights based on their capabilities within the current task context and the performance goals.
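What 'privileging' might look like in code can only be guessed at from the article; the sketch below is a hypothetical arbitration policy in which the human stays the default authority but a privileged autonomous agent may intervene. The Agent fields and the override threshold are illustrative assumptions, not the published framework.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    decision: bool          # proposed action: act / don't act
    reliability: float      # estimated reliability in the current context (0-1)
    privileged: bool        # holds special rights in this task context

def arbitrate(human: Agent, machine: Agent, override_threshold: float = 0.9) -> bool:
    """Keep the human as the primary authority, but let a privileged
    autonomous agent intervene when its contextual reliability is high
    enough that deferring to the human would likely cause an error."""
    if (machine.privileged and machine.reliability >= override_threshold
            and machine.reliability > human.reliability):
        return machine.decision  # mitigation: machine overrides a likely human error
    return human.decision        # default: the human remains the central authority

human = Agent("operator", decision=True, reliability=0.55, privileged=True)
robot = Agent("robot", decision=False, reliability=0.95, privileged=True)
print(arbitrate(human, robot))   # False: the privileged machine mitigates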

"Through a series of simulation experiments, the PSF significantly improved joint human-autonomy performance without sacrificing the gains to be made from incorporating human strengths.

"Additional studies have extended this approach into a wide range of applications that include joint human-autonomy driving, human-autonomy target detection, and command and control. Overall, these efforts provide further evidence that the incorporation of the principles of the PSF can provide improved performance of joint human-autonomy systems across a wide range of applications," said Marathe.

He said future efforts will focus on developing novel methods for incorporating the PSF into experimental human-autonomy systems to enable further testing of the impact of this approach on human-autonomy system performance, and on generalizing the framework to accommodate a variety of tasks and scenarios.

The researchers argue that the inception of a generalizable framework that incorporates dynamic estimates of human capabilities to facilitate and advance human-autonomy interaction provides a rich opportunity to revolutionize the capabilities of multi-agent cooperative teams across a broad range of applications, a shift Marathe estimates could take hold in about 20 years.

Human-automation integration challenges were addressed in human-computer coupled visual search, real-time mitigation of mistrust in automation, advanced commander decision aids, and in-the-loop test and evaluation of human-robot systems.

Marathe said the research was motivated by persistent, fundamental issues that have thus far precluded the transition of advanced automation and autonomous technologies from the laboratory into the operational environment.

"Generally, humans readily adapt to varying task and environmental complexities during decision making and therefore are often treated as a failsafe for cases where autonomous technology underperforms. However, humans are constantly changing due to factors such as fatigue or shifts in attention, which means that even skilled humans sometimes make errors. The inherent variability in human performance makes the problem of integrating humans in the loop with extremely challenging," he said.

Until recently, most HAI frameworks have preserved a central role for the human while neglecting the important role of human variability, Marathe noted. "As a result, human excellence has not been fully exploited, nor has human failure been fully offset, leaving joint human-autonomy systems fundamentally incapable of achieving their full potential."
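The article mentions human sensing of factors such as fatigue and attention but gives no model, so the linear discounting below is only a hypothetical way to turn sensed operator state into the kind of time-varying reliability estimate an arbitration scheme like the earlier sketch would consume.

def human_reliability(baseline: float, fatigue: float, attention: float) -> float:
    """Discount a skilled operator's baseline reliability by sensed state.

    fatigue and attention are assumed to be normalized to [0, 1]; high
    fatigue or low attention lowers the moment-to-moment estimate,
    capturing the variability that makes static human-in-the-loop
    integration so challenging.
    """
    estimate = baseline * (1.0 - 0.5 * fatigue) * (0.5 + 0.5 * attention)
    return max(0.0, min(1.0, estimate))

print(human_reliability(0.9, fatigue=0.8, attention=0.4))  # fatigued, distracted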

Provided by U.S. Army Research Laboratory

Citation: New study challenges long-accepted views on human-autonomy interaction (2017, August 17) retrieved 1 May 2024 from https://phys.org/news/2017-08-long-accepted-views-human-autonomy-interaction.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
