Training intelligent systems to think on their own

Jul 01, 2013 by Kurt Pfitzer

(Phys.org) —The computing devices and software programs that enable the technology on which the modern world relies, says Hector Muñoz-Avila, can be likened to adolescents.

Thanks to advanced mathematical formulas known as algorithms, these systems, or agents, are now sufficiently intelligent to reason and to make responsible decisions—without human intervention—in their own environments.

Indeed, says Muñoz-Avila, an associate professor of computer science and engineering, algorithm-powered agents will soon be capable of investigating a complex problem, determining the most effective intermediate goals and taking action to achieve a long-range solution. In the process, agents will adjust to unexpected situations and learn from their environment, their cases and their mistakes.

They will achieve all of this without human control or guidance.

An agent—a robot, for example, or an automated player or the system monitoring an electrical grid—that is programmed with advanced algorithms can do many things not possible for a human being, says Muñoz-Avila. It can sift through thousands of data points, pinpoint unusual patterns or anomalies, correct most of them in real time and single out the complex abnormalities that require human attention.
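To make that concrete, the following is a minimal, hypothetical sketch of that kind of triage: compare each new reading against recent history, treat mild deviations as correctable in real time, and escalate severe ones for human review. The thresholds, sample data and function names are invented for illustration and are not drawn from Muñoz-Avila's systems.

# Hypothetical monitoring triage, written in Python for illustration only.
# A reading is "ok", auto-correctable, or escalated, based on how far it
# deviates (in standard deviations) from a sliding window of recent history.

from statistics import mean, stdev

def triage(readings, window=50, mild=3.0, severe=6.0):
    """Label each reading as 'ok', 'auto-corrected' or 'needs human review'."""
    labels = []
    for i, value in enumerate(readings):
        history = readings[max(0, i - window):i]
        if len(history) < 2:
            labels.append("ok")          # not enough history to judge yet
            continue
        mu = mean(history)
        sigma = stdev(history) or 1e-9   # avoid division by zero on flat data
        z = abs(value - mu) / sigma
        if z >= severe:
            labels.append("needs human review")   # complex abnormality
        elif z >= mild:
            labels.append("auto-corrected")       # simple fix in real time
        else:
            labels.append("ok")
    return labels

# Sensor-like data with one mild spike (16) and one severe spike (48).
data = [10, 12, 9, 11, 10, 13, 9, 11, 16, 10, 48, 11]
print(triage(data))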

Muñoz-Avila, a pioneer in the new field of goal-driven autonomy (GDA), recently received a three-year research grant from the National Science Foundation to develop autonomous agents that dynamically identify and self-select their goals, and to test these agents in computer games.

Prepared to deal with the unexpected

"For a long time," he says, "scientists have told agents which goals to achieve. What we want to do now is to develop agents that autonomously select their own goals and accomplish them.

"A GDA agent follows a basic cycle. It has an expectation of something that will happen in an environment. When it detects an unexpected phenomenon, it attempts to explain the between what it expected and what is actually happening. It is constantly checking when expectations are satisfied and when they are not, developing explanations for discrepancies and forming new goals to achieve them."

The potential applications of GDA agents include military planning, robotics, computer games and control systems for electrical grids and security networks. One example: unmanned vehicles that operate autonomously under water for several days while performing search or repair missions.

In recent years, Muñoz-Avila and collaborators from the Naval Research Laboratory and the Georgia Institute of Technology pioneered the topic of GDA agents, which overcome unexpected phenomena in their environments.

In his current project, Muñoz-Avila and his students have two goals—to improve and expand the knowledge that GDA agents acquire of their domains and to generalize the success of these agents to other domains and applications.

As autonomous systems and software gain wider use in society, says Muñoz-Avila, GDA agents must be able to recognize and diagnose discrepancies in their environments and take intelligent action.

As an example, he cites an automated air quality control system that is programmed to monitor and control a variety of devices.

"It is very difficult, if not impossible," Muñoz-Avila says, "for a programmer to foresee all of the potential situations that such a system will encounter."

Similarly, the openness of many networks requires a cyber security system capable of continuously integrating new technologies and services.

"It is not feasible to implement counter measures for all potential threats in advance," he says. "An agent-based system must continuously monitor the overall network, learn and reason about expectations, and act autonomously when discrepancies are encountered."

Two Ph.D. candidates—Ulit Jaidee and Dustin Dannenhauer—are working with Muñoz-Avila in the area of plan diversity, in which GDA agents formulate multiple, significantly different solutions to a complex problem.
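As one way to make "significant differences" measurable, the sketch below scores a set of candidate plans by how little their actions overlap. The Jaccard-style distance and the game-like action names are assumptions chosen for illustration, not the metric used in the Lehigh group's research.

# Illustrative diversity score for a set of candidate plans (Python).
# A plan is modeled as a sequence of named actions; the distance between two
# plans is the fraction of actions they do not share (Jaccard distance).

from itertools import combinations

def plan_distance(plan_a, plan_b):
    a, b = set(plan_a), set(plan_b)
    return 1.0 - len(a & b) / len(a | b) if (a | b) else 0.0

def diversity(plans):
    """Average pairwise distance; higher means a more diverse set of plans."""
    pairs = list(combinations(plans, 2))
    return sum(plan_distance(p, q) for p, q in pairs) / len(pairs)

candidates = [
    ["scout", "build_barracks", "train_infantry", "attack"],
    ["scout", "build_tower", "defend", "expand"],
    ["expand", "build_barracks", "train_infantry", "attack"],
]
print(round(diversity(candidates), 2))  # roughly 0.7 for these three plans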



User comments : 1


antialias_physorg, Jul 01, 2013
Even though the whole field of autonomous agents is fascinating, there are a couple of things in this article that sound odd:

"Thanks to advanced mathematical formulas known as algorithms"
You've got to be kidding with this statement, right?

"algorithm-powered agents"
I'm trying to imagine what a non-algorithm-powered agent would look like. Rather inert, I suspect.
