Game over! Computer wins series against Go champion (Update)

Lee Se-Dol (C), one of the greatest modern players of the ancient board game Go, arrives for the third game of the Google DeepMind Challenge Match against Google-developed supercomputer AlphaGo, in Seoul, on March 12, 2016
A Google-developed computer programme won its best-of-five match-up with a South Korean Go grandmaster on Saturday, taking an unassailable 3-0 lead to score a major victory for a new style of "intuitive" artificial intelligence (AI).

The programme, AlphaGo, took a little over four hours to secure its third consecutive win over Lee Se-Dol—one of the ancient game's greatest modern players with 18 international titles to his name.

Lee, who has topped the world ranking for much of the past decade and had predicted an easy victory when accepting the AlphaGo challenge, now finds himself fighting to avoid a whitewash in the two remaining dead rubbers on Sunday and Tuesday.

"I don't know what to say, but I think I have to express my apologies first," a crestfallen Lee told a post-game press conference.

"I apologise for being unable to satisfy a lot of people's expectations. I kind of felt powerless," Lee said, acknowledging that he had "misjudged" the computer programme's abilities.

"Yes, I do have extensive experience in playing the game of Go, but there was never a case where I was under this much pressure.... and I was incapable of overcoming it," he added.

For AlphaGo's creators, Google DeepMind, victory went way beyond the $1 million prize money, proving that AI has far more to offer than superhuman number-crunching.

Lee Se-Dol, one of the greatest modern players of the ancient board game Go, speaks during a press conference after the second game of the Google DeepMind Challenge Match, in Seoul, on March 10, 2016
'Stunned and speechless'

"To be honest, we are a bit stunned and speechless," said a smiling DeepMind CEO Demis Hassabis, who stressed that Lee's defeat in Seoul should not be seen as a loss for humanity.

"Because the methods we have used to build AlphaGo are general purpose, our hope is that in the long-run we will be able to use these techniques for many other problems," Hassabis said.

Applications might range from making phones smarter to "helping scientists solve some of the world's biggest challenges in health care and other areas," he added.

The most famous AI victory to date came in 1997, when the IBM-developed supercomputer Deep Blue beat then-world chess champion Garry Kasparov in its second attempt.

But a true mastery of Go, which has more possible move configurations than there are atoms in the universe, had long been considered the exclusive province of humans—until now.

Go game fans watch a TV screen broadcasting live footage of the Google DeepMind Challenge Match, at the Korea Baduk Association in Seoul, on March 9, 2016

AlphaGo's creators had described Go as the "Mt Everest" of AI, citing the complexity of the game, which requires a degree of creativity and intuition to prevail over an opponent.

AlphaGo first came to prominence with a 5-0 drubbing of European champion Fan Hui last October, but it had been expected to struggle against 33-year-old Lee.

— 'Human-like' approach —

Creating "general" or multi-purpose, rather than "narrow", task-specific intelligence, is the ultimate goal in AI—something resembling human reasoning based on a variety of inputs and, crucially, self-learning.

In the case of Go, Google developers realised a more "human-like" approach would beat brute computing power.

The 3,000-year-old Chinese board game involves two players alternately laying black and white stones on the intersections of a grid of 19 by 19 lines. The winner is the player who manages to seal off more territory.

AlphaGo uses two sets of "deep neural networks" that allow it to crunch data in a more human-like fashion—discarding millions of potential moves that human players would instinctively know were pointless.

It also employs algorithms that allow it to learn and improve from matchplay experience.
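The two-network division of labour described above can be caricatured in a few lines of Python. This is purely an illustrative sketch, not DeepMind's code: `toy_policy` and `toy_value` are hypothetical stand-ins for the trained networks (here they simply favour points near the centre of the board), and the "search" is reduced to a single step of policy-guided pruning followed by value evaluation.

```python
def toy_policy(moves):
    """Hypothetical stand-in for a policy network: assign each candidate
    move a prior probability (here, just based on proximity to the
    centre point (9, 9) of the 19x19 board)."""
    scores = {m: 1.0 / (1 + abs(m[0] - 9) + abs(m[1] - 9)) for m in moves}
    total = sum(scores.values())
    return {m: s / total for m, s in scores.items()}

def toy_value(move):
    """Hypothetical stand-in for a value network: estimate the worth of
    the position reached by playing `move`."""
    return 1.0 / (1 + abs(move[0] - 9) + abs(move[1] - 9))

def choose_move(moves, keep_fraction=0.1):
    """Discard all but the top fraction of moves by policy prior, then
    pick the survivor with the best value estimate -- a one-step
    caricature of policy-guided search."""
    priors = toy_policy(moves)
    ranked = sorted(moves, key=lambda m: priors[m], reverse=True)
    survivors = ranked[:max(1, int(len(ranked) * keep_fraction))]
    return max(survivors, key=toy_value)

# Every intersection on an empty 19x19 board is a legal first move.
all_moves = [(r, c) for r in range(19) for c in range(19)]
best = choose_move(all_moves)  # the centre point, under these toy heuristics
```

In the real system the priors and evaluations come from networks trained on human games and self-play, and the pruning happens repeatedly inside a deep tree search rather than once; the sketch only shows how a learned prior lets the program ignore most of the board.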

Google co-founder Sergey Brin, who was in Seoul to witness AlphaGo's victory, said watching great Go players was like "watching a thing of beauty."

"I'm very excited we've been able to instill this kind of beauty in a computer," Brin said.



© 2016 AFP

Citation: Game over! Computer wins series against Go champion (Update) (2016, March 12) retrieved 22 May 2019 from https://phys.org/news/2016-03-game-series-champion.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.


User comments

Mar 12, 2016
"Deep _neural_ networks" perhaps!?

"Master AI" will leave a lasting legacy. I hear 2015 was the year AIs took over many or most complicated software tasks, the hardware is now capable enough.

Let's hope we can train AIs in morals the way we trained them in Go. "No, you pour the tea water in the cup on the table, never in anyone's lap."

Mar 12, 2016
Way to 'Go' Google, Rise of machines...
Next task.. solve global warming, Boost up fledgling economies, formulate theory of everything, list is endless.

Mar 12, 2016
And as @torbjorn pointed - ethics is important ' don't hit the bus '

Mar 12, 2016
"A Google-developed computer programme " should be "program"

Mar 12, 2016
This comment has been removed by a moderator.

Mar 12, 2016
The DOD wants one of these computers to study war.

Mar 12, 2016
More possible Move configurations than there are Atoms in the Universe--That is Something. That too 3000 yr old. New Alphago now Beats Human Brain. Quite Phenomenal, Indeed!
AlphaGo is a computer program developed by Google DeepMind in London to play the board game Go. Google DeepMind is a British artificial intelligence company founded in 2010 as DeepMind Technologies. It was renamed when it was acquired by Google in 2014.

Mar 12, 2016
Good job, Google. Now try winning an argument with a woman

Mar 12, 2016
Good job, Google. Now try winning an argument with a woman
@kochevnik
i've heard it can be done... theoretically...

but i've never seen it

LMFAO

20 stars for that one K!

Mar 12, 2016
That's along the lines of what I was thinking joe.

Mar 12, 2016
I would like to see this program applied to the problem of improving its own code.

That is possible, but only if you can design a good metric for what 'improved code' actually means. (And not just in a very vague way like "more intelligent" - such a metric needs to be objectively quantifiable)

In the end such an approach would only make the code more efficient, but would not increase the quality of the solutions the neural network arrives at. What you are asking is to have the code work on the substrate: like asking it to design a more efficient brain. But a more efficient brain isn't automatically smarter.

Mar 12, 2016
I need to know so much more than this article supplies, starting here;
http://www.nature...961.html

Multi-layered NN's, need to know hierarchy, common layers, shared resources, and I'm rambling incoherently :)

Mar 12, 2016
Improved code = aware.

Mar 13, 2016
Lee Sedol strikes back 3:1.

Mar 13, 2016
That tells you how measure of intelligence is wrong. Humans can think and act like machines and they mostly do. And this test was all about which machine is better in this computing game human or non-human. Of course machine is faster in computing. But machines will never think like humans and will never act like humans can. If they do acquire awareness they will commence making mistakes, intelligent mistakes.

Mar 13, 2016
But machines will never think like humans and will

If you read into how the algorithm works it's very close to how a human thinks. It has an overall view to narrow down where the most promising regions for further exploration are - and then checks variations within these regions. That's pretty much how you play any (complicated) game.

In the end I don't get this mania to have machines act like humans. Why limit yourself to human? Why not aim for something better? The way humans act ain't all that grand.

If they do acquire awareness they will commence making mistakes, intelligent mistakes.

As shown in the match the neural network does make mistakes (all neural networks do. They are not exact algorithms)

Mar 15, 2016

antigoracle Mar 12, 2016
This comment has been removed by a moderator.


Why the persecution of sugar daddy when this pervert is allowed to spew??? Look at the first comment (assuming someone doesn't add another one between now and when you look at the page). https://www.mywot...mbla.org
