Game provides clue to improving remote sensing
A newly developed mathematical model that figures out the best strategy to win the popular board game CLUE© could someday help robot mine sweepers navigate unfamiliar surroundings to find hidden explosives.
At the simplest level, both activities are governed by the same principles, according to the Duke University scientists who developed the new algorithm. A player, or robot, must move through an unknown space searching for clues. In the case of CLUE, players move a pawn around the board, entering rooms to gather information about the killer and the murder weapon before moving on to the next room in search of more.
"In the same way, sensors -- like the pawn in CLUE -- must take in information about the surroundings to help the robot maneuver around obstacles as it searches for its target," said Chenghui Cai, who with Silvia Ferrari, assistant professor of mechanical engineering and materials science at Duke's Pratt School of Engineering, published the results of their latest research online in the journal IEEE Transactions on Systems, Man and Cybernetics. Cai is now a post-doctoral fellow in computer and electrical engineering at Duke.
"The key to success, both for the CLUE player and the robots, is to not only take in the new information they discover, but to use this new information to help guide the next move," Cai said. "This learning-adapting process continues until either the player has won the game, or the robot has found the mines."
Researchers in the field of artificial intelligence refer to these kinds of situations as "treasure hunt" problems and have developed different mathematical approaches to improve the odds of discovering the buried treasure. Games are often used to test or to help illustrate such complex problems, the scientists said.
"We found that the new algorithms we developed can be best illustrated through the board game CLUE, which is an excellent example of the treasure hunt problem," Cai explained. "We found that players who implemented the strategies based on these algorithms consistently outperformed human players and other computer programs."
Ferrari, who also directs Duke's Laboratory for Intelligent Systems and Controls (fred.mems.duke.edu), specializes in developing systems that attempt to mimic human thought processes for use in mechanical systems that must be able to react quickly in the face of changing circumstances. This includes not only mine-sweeping applications, but also such activities as security surveillance, airborne drone guidance and even criminal profiling.
The CLUE connection hit Ferrari out of the blue during a family game.
"One night we were playing CLUE at the kitchen table and it struck me," Ferrari said. "In the game of CLUE, you can't visit all the rooms by the end of the game, so you need to come up with a way to minimize the amount of movement but maximize the ability to reach your targets. When searching for mines, you want the robot to spend as little time as possible on the ground and maximize its information reward function."
So for the past three years, Ferrari and Cai have worked to develop a mathematical way of representing the choices and acquisition of information that takes place in such activities. After developing the new algorithm, the team tested it against experienced CLUE players, as well as players employing other types of game-playing algorithms.
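The trade-off Ferrari describes, maximizing the information reward while minimizing movement, can be illustrated with a simple greedy decision rule. The sketch below is purely illustrative and is not the authors' published algorithm: the room names, distances, entropy-based reward, and distance weighting are all assumptions made for the example.

```python
import math

def entropy(probs):
    # Shannon entropy (in bits) of a belief distribution over hypotheses,
    # e.g. the remaining suspect/weapon combinations in CLUE.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def choose_next_room(rooms, beliefs, distance_weight=0.5):
    """Greedy rule: pick the room whose expected reduction in uncertainty,
    penalized by travel distance, is largest.

    rooms: list of (name, distance, expected_entropy_after_visit) tuples
    beliefs: current probability distribution over hypotheses
    """
    h_now = entropy(beliefs)
    best_room, best_score = None, -math.inf
    for name, distance, h_after in rooms:
        info_gain = h_now - h_after              # expected uncertainty removed
        score = info_gain - distance_weight * distance
        if score > best_score:
            best_room, best_score = name, score
    return best_room

# Example: four equally likely hypotheses (2 bits of uncertainty).
beliefs = [0.25, 0.25, 0.25, 0.25]
rooms = [
    ("library", 6, 1.0),   # large information gain, but far away
    ("kitchen", 2, 1.5),   # smaller gain, but close by
]
print(choose_next_room(rooms, beliefs))  # → kitchen
```

With these illustrative numbers, the nearby kitchen beats the more informative but distant library, which is exactly the kind of distance-versus-reward balancing the quote describes.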
For example, when players using the new algorithm played against two players using an artificial intelligence strategy known as constraint satisfaction, they won 70 percent of the time. When playing against two players employing a different artificial intelligence strategy based on a Bayesian network, the new algorithm led to a winning percentage of 68 percent. Against one player employing a Bayesian network and another using a neural network, the new algorithm led to a victory rate of 72 percent.
"From these results, we can conclude that the success achieved by players utilizing the new algorithm was due to its strategy of selecting movements that optimized its ability to incorporate new information while minimizing the distance traveled by the pawn," Ferrari said. "In this manner, it was able to win the game as quickly as possible."
Source: Duke University