Training intelligent systems to think on their own

Jul 01, 2013 by Kurt Pfitzer

(Phys.org) —The computing devices and software programs that enable the technology on which the modern world relies, says Hector Muñoz-Avila, can be likened to adolescents.

Thanks to advanced mathematical formulas known as algorithms, these systems, or agents, are now sufficiently intelligent to reason and to make responsible decisions in their own environments, without human intervention.

Indeed, says Muñoz-Avila, an associate professor of computer science and engineering, algorithm-powered agents will soon be capable of investigating a complex problem, determining the most effective intermediate goals and taking action to achieve a long-range solution. In the process, agents will adjust to unexpected situations and learn from their environment, their cases and their mistakes.

They will achieve all of this without human control or guidance.

An agent that is programmed with advanced algorithms, whether a robot, an automated game player or the system monitoring an electrical grid, can do many things not possible for a human being, says Muñoz-Avila. It can sift through thousands of data points, pinpoint unusual patterns or anomalies, correct most of them in real time and single out the complex abnormalities that require human intervention.
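To make that triage concrete, here is a minimal Python sketch of the idea: scan a stream of readings, automatically correct the small anomalies and flag the complex ones for a person. The thresholds, field names and correction rule are all invented for illustration and are not taken from Muñoz-Avila's systems.

```python
# A rough illustration of the triage described above: scan a stream of
# readings, auto-correct the simple outliers and flag the rest for a
# human operator. Thresholds and sensor names are made up for the example.

def triage(readings, expected=50.0, tolerance=5.0, auto_fix_limit=15.0):
    corrected, escalated = [], []
    for sensor, value in readings:
        deviation = abs(value - expected)
        if deviation <= tolerance:
            continue                               # normal reading, nothing to do
        if deviation <= auto_fix_limit:
            corrected.append((sensor, expected))   # small anomaly: correct in place
        else:
            escalated.append((sensor, value))      # complex anomaly: needs a person
    return corrected, escalated

fixed, flagged = triage([("s1", 51.2), ("s2", 58.0), ("s3", 90.0)])
print(fixed)    # [('s2', 50.0)]
print(flagged)  # [('s3', 90.0)]
```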

Muñoz-Avila, a pioneer in the new field of goal-driven autonomy (GDA), recently received a three-year research grant from the National Science Foundation to develop autonomous agents that dynamically identify and self-select their goals, and to test these agents in computer games.

Prepared to deal with the unexpected

"For a long time," he says, "scientists have told agents which goals to achieve. What we want to do now is to develop agents that autonomously select their own goals and accomplish them.

"A GDA agent follows a basic cycle. It has an expectation of something that will happen in an environment. When it detects an unexpected phenomenon, it attempts to explain the between what it expected and what is actually happening. It is constantly checking when expectations are satisfied and when they are not, developing explanations for discrepancies and forming new goals to achieve them."

The potential applications of GDA agents include military planning, robotics, computer games and control systems for electrical grids and security networks. One example: unmanned vehicles that operate autonomously under water for several days while performing search or repair missions.

In recent years, Muñoz-Avila and collaborators from the Naval Research Laboratory and the Georgia Institute of Technology pioneered the topic of GDA agents, which overcome unexpected phenomena in their environments.

In his current project, Muñoz-Avila and his students have two goals—to improve and expand the knowledge that GDA agents acquire of their domains and to generalize the success of these agents to other domains and applications.

As autonomous robots and software gain wider use in society, says Muñoz-Avila, GDA agents must be able to recognize and diagnose discrepancies in their environments and take intelligent action.

As an example, he cites an automated air quality control system that is programmed to monitor and control a variety of devices.

"It is very difficult, if not impossible," Muñoz-Avila says, "for a programmer to foresee all of the potential situations that such a system will encounter."

Similarly, the openness of many networks requires a cyber security system capable of continuously integrating new technologies and services.

"It is not feasible to implement counter measures for all potential threats in advance," he says. "An agent-based system must continuously monitor the overall network, learn and reason about expectations, and act autonomously when discrepancies are encountered."

Two Ph.D. candidates, Ulit Jaidee and Dustin Dannenhauer, are working with Muñoz-Avila in the area of plan diversity, in which GDA agents formulate multiple, significantly different solutions to a complex problem.
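One simple way to picture plan diversity is to pick, from a pool of candidate plans, a subset whose members differ from each other as much as possible. The sketch below assumes a naive set-difference distance between plans and a greedy selection rule; it is an illustration of the idea, not the team's algorithm.

```python
# A small sketch of plan diversity: from a pool of candidate plans,
# greedily keep the ones most different from those already chosen.
# The set-difference distance is just one possible (made-up) metric.

def plan_distance(plan_a, plan_b):
    """Count the actions that appear in one plan but not the other."""
    return len(set(plan_a) ^ set(plan_b))

def diverse_plans(candidates, k):
    """Greedily pick k plans, each maximizing distance to the picks so far."""
    chosen = [candidates[0]]
    while len(chosen) < k and len(chosen) < len(candidates):
        best = max((p for p in candidates if p not in chosen),
                   key=lambda p: min(plan_distance(p, c) for c in chosen))
        chosen.append(best)
    return chosen

plans = [["scout", "attack"], ["scout", "defend"], ["build", "expand"]]
print(diverse_plans(plans, 2))
# -> [['scout', 'attack'], ['build', 'expand']]
```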
