Distinguishing between humans and computers in the game of go

go game
Computers and humans use different kinds of strategies when playing go, pointing to fundamental differences in solving problems.
(Phys.org)—By analyzing the statistical features of thousands of go games played by humans and computers, researchers have found that it's surprisingly easy to tell whether a game is being played by a human or by a computer. The results point to fundamental differences in the ways that humans and computers solve problems and may lead to a new kind of Turing test designed to distinguish between the two.

The researchers, C. Coquidé and B. Georgeot at the University of Toulouse, and O. Giraud at the University of Paris-Saclay, have published a paper on their statistical analysis of go games played by humans and computers in a recent issue of EPL.

"We think our work indicates a path towards a better characterization and understanding of the between human and decision-making processes, which could be applied in many different areas," Giraud told Phys.org.

As the researchers explain, go is a particularly good platform to investigate how computers solve complex problems due to the vast number of possible moves a player can make at any turn. On a 19x19 go board, there are 10^171 possible legal positions (compared to "just" 10^50 in chess). In addition, the number of possible games of go was recently estimated to be at least 10^(10^108). Such numbers are gigantic even for a computer, making it impossible for any program to simply use brute-force methods to analyze all possible moves and games. Instead, computers must use more sophisticated approaches.
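A back-of-envelope calculation shows why brute force is hopeless. The figures below are commonly quoted rough estimates for go, not numbers from the paper: with about b legal moves available per turn over a game of d moves, a naive search would have to consider roughly b^d lines of play.

```python
import math

# Rough, commonly quoted figures for go (assumptions, not from the paper):
# about 250 legal moves per turn, over a game of about 150 moves.
b, d = 250, 150

# A naive search visits roughly b**d lines of play; work in log10 to avoid
# overflow and to read the answer directly as a power of ten.
log10_lines = d * math.log10(b)
# log10_lines is about 360, i.e. roughly 10**360 lines of play --
# vastly more than the ~10**80 atoms in the observable universe.
```

Even shaving the branching factor dramatically leaves a number no computer can enumerate, which is why programs like Fuego fall back on sampling (Monte Carlo) rather than exhaustive search.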

In the new study, the researchers constructed databases of 8000 games played by amateur humans; 8000 games played by the software Gnugo, which uses a deterministic approach; 8000 games played by the software Fuego, which uses a Monte Carlo approach; and 50 games played by the software AlphaGo, which has become famous in the past couple of years for beating world champion human go players. The researchers then built networks for each database that capture information about the patterns of moves on the go board.

One of the most interesting results is that the networks based on software—especially Gnugo—have large numbers of "communities," which are parts of a network that are strongly linked within themselves but weakly linked to the rest of the network. As the researchers explain, the presence of these communities indicates that the software programs create many distinct strategies that have little in common with one another; that is, their strategies are varied and diverse. By comparison, the networks based on human games have fewer communities and more large hubs with many direct links, indicating that human strategies are more closely related to each other and less diverse.
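The network construction can be sketched in miniature. Everything in the sketch below is hypothetical and much simpler than the paper's actual encoding: each node stands for a local board pattern (represented here by a plain string label), a directed edge links the pattern around one move to the pattern around the next move of the same game, and a "hub" is simply a pattern with many distinct follow-ups.

```python
from collections import Counter, defaultdict

def build_move_network(games):
    """Build a weighted edge list from games.

    games: list of games, each a list of pattern labels, one per move.
    Returns a Counter mapping (pattern, next_pattern) -> occurrence count.
    """
    edges = Counter()
    for game in games:
        for a, b in zip(game, game[1:]):
            edges[(a, b)] += 1
    return edges

def hub_degrees(edges):
    """Out-degree (number of distinct successor patterns) per pattern."""
    successors = defaultdict(set)
    for a, b in edges:
        successors[a].add(b)
    return {a: len(bs) for a, bs in successors.items()}

# Two toy games sharing an opening pattern "p1":
games = [["p1", "p2", "p3"], ["p1", "p4", "p3"]]
edges = build_move_network(games)
degrees = hub_degrees(edges)
# "p1" links to two distinct follow-ups, making it the biggest hub here.
```

On real databases, human networks would concentrate weight on a few such hubs (shared, culturally transmitted openings), while software networks would fragment into many weakly connected communities.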

While enlightening, these results are not unexpected, as they correspond to some previous observations of computers playing go. For instance, in 2016 and 2017, human analysts watching AlphaGo compete against world champions were often surprised and puzzled by the strategies that the computer used.

Overall, the researchers found that the statistical differences between the computer- and human-generated networks are much larger than the variability within each network, indicating that the differences are statistically significant and could potentially be used to distinguish between groups of human-played games and computer-played games. Further, the results show that it's not necessary to analyze thousands of games, as the differences could be significant even for the relatively small 50-game database from AlphaGo.
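That criterion can be illustrated with a toy computation (all numbers below are invented for illustration, not taken from the paper): summarize each set of games by one network feature per subsample, and call the two groups distinguishable when the gap between the group means dwarfs the spread within each group.

```python
import statistics

def separation(features_a, features_b):
    """Ratio of the between-group gap to the within-group spread.

    A value much larger than 1 suggests the groups are statistically
    distinguishable by this feature.
    """
    gap = abs(statistics.mean(features_a) - statistics.mean(features_b))
    spread = max(statistics.stdev(features_a), statistics.stdev(features_b))
    return gap / spread

# Made-up community counts per subsample of games (illustration only):
human = [4.1, 3.9, 4.0, 4.2]
software = [9.8, 10.1, 9.9, 10.2]
score = separation(human, software)
# score is far above 1: the gap between groups swamps their internal spread.
```

Because the gap is measured against within-group variability, a feature like this can separate even a small sample, which is consistent with the 50-game AlphaGo database already showing significant differences.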

As a consequence, the researchers propose that the statistical differences could be used to design a new kind of Turing test, similar to the original test in which a person tries to tell whether they are interacting with a human or a computer by asking questions. The new version of the Turing test would involve playing go games instead of asking questions, and then performing statistical tests to identify characteristic features of human and computer players.

The researchers also expect that it would be interesting to use similar statistical methods to investigate the differences in how humans and computers approach other games besides go. From this data, it may be possible to gain a better understanding of how computers "think."

"We would like to study in more detail the origin of the differences between the human-generated and computer-generated networks, to see how they reflect in terms of differences in strategies used in the game," Giraud said. "We are also planning to apply these techniques to other areas where computers and humans are present, starting with other board games such as chess."



More information: C. Coquidé, B. Georgeot, and O. Giraud. "Distinguishing humans from computers in the game of go: A complex network approach." EPL. DOI: 10.1209/0295-5075/119/48001
Journal information: Europhysics Letters (EPL)

© 2017 Phys.org

Citation: Distinguishing between humans and computers in the game of go (2017, November 6) retrieved 19 April 2019 from https://phys.org/news/2017-11-distinguishing-humans-game.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

User comments

Nov 06, 2017
"We would like to study in more detail the origin of the differences between the human-generated and computer-generated networks, to see how they reflect in terms of differences in strategies used in the game," Giraud said.

It seems this might allow us to learn to learn (no typo) in a different way. Having two ways to learn might open entirely new ways of looking at scientific issues.

As a consequence, the researchers propose that the statistical differences could be used to design a new kind of Turing test, similar to the original test in which a person tries to tell whether they are interacting with a human or a computer by asking questions.

Maybe this could also be used as an "anti-cheating" test in online chess, poker, (or even computer games) to protect against bots?

Nov 06, 2017
Such numbers are gigantic even for a computer, making it impossible for any program to simply use brute-force methods to analyze all possible moves and games. Instead, computers must use more sophisticated approaches.


Not necessarily. Since the problem space is so huge, beating other players can be simply a matter of finding strategies that nobody else has found yet. A simpler algorithm can beat a more sophisticated algorithm by being more efficient in varying its responses - or if not more efficient then simply faster at simulating games so it can search through a larger portion of the problem space.

That is reflected in the statistical differences between humans and computers. People can only play so many games, so they converge to "cultures" where they copy strategies from other players, whereas the computer has all the time in the world as it can play thousands of games a day, so it can afford to experiment and find more different solutions.

Nov 06, 2017
Seeing as the neural net learning that enabled AlphaGo to beat Lee Sedol has itself been beaten 100 games to none with less learning, doesn't that imply that any Turing test using go would not work, as the computer could simply be taught not just to win but to win in a more human-like way? I think any Turing test has to involve symbolic language.

https://qbi.uq.ed...ree-days
