Game over! Computer wins series against Go champion (Update)

March 12, 2016
Lee Se-Dol (C), one of the greatest modern players of the ancient board game Go, arrives for the third game of the Google DeepMind Challenge Match against Google-developed supercomputer AlphaGo, in Seoul, on March 12, 2016

A Google-developed computer programme won its best-of-five match-up with a South Korean Go grandmaster on Saturday, taking an unassailable 3-0 lead to score a major victory for a new style of "intuitive" artificial intelligence (AI).

The programme, AlphaGo, took a little over four hours to secure its third consecutive win over Lee Se-Dol—one of the ancient game's greatest modern players with 18 international titles to his name.

Lee, who has topped the world ranking for much of the past decade and had predicted an easy victory when accepting the AlphaGo challenge, now finds himself fighting to avoid a whitewash in the two remaining dead rubbers on Sunday and Tuesday.

"I don't know what to say, but I think I have to express my apologies first," a crestfallen Lee told a post-game press conference.

"I apologise for being unable to satisfy a lot of people's expectations. I kind of felt powerless," Lee said, acknowledging that he had "misjudged" the computer programme's abilities.

"Yes, I do have extensive experience in playing the game of Go, but there was never a case where I was under this much pressure... and I was incapable of overcoming it," he added.

For AlphaGo's creators, Google DeepMind, the victory went far beyond the $1 million prize, proving that AI has far more to offer than superhuman number-crunching.

Lee Se-Dol, one of the greatest modern players of the ancient board game Go, speaks during a press conference after the second game of the Google DeepMind Challenge Match, in Seoul, on March 10, 2016

'Stunned and speechless'

"To be honest, we are a bit stunned and speechless," said a smiling DeepMind CEO Demis Hassabis, who stressed that Lee's defeat in Seoul should not be seen as a loss for humanity.

"Because the methods we have used to build AlphaGo are general purpose, our hope is that in the long-run we will be able to use these techniques for many other problems," Hassabis said.

Applications might range from making phones smarter to "helping scientists solve some of the world's biggest challenges in health care and other areas," he added.

The most famous AI victory to date came in 1997, when the IBM-developed supercomputer Deep Blue beat then-world chess champion Garry Kasparov in its second attempt.

But a true mastery of Go, which has more possible move configurations than there are atoms in the universe, had long been considered the exclusive province of humans—until now.

Go game fans watch a TV screen broadcasting live footage of the Google DeepMind Challenge Match, at the Korea Baduk Association in Seoul, on March 9, 2016

AlphaGo's creators had described Go as the "Mt Everest" of AI, citing the complexity of the game, which requires a degree of creativity and intuition to prevail over an opponent.

AlphaGo first came to prominence with a 5-0 drubbing of European champion Fan Hui last October, but it had been expected to struggle against 33-year-old Lee.

— 'Human-like' approach —

Creating "general", multi-purpose intelligence, rather than "narrow", task-specific intelligence, is the ultimate goal in AI: something resembling human reasoning, drawing on a variety of inputs and, crucially, capable of self-learning.

In the case of Go, Google's developers realised that a more "human-like" approach would win out over brute computing power.

The 3,000-year-old Chinese board game involves two players alternately laying black and white stones on a chequerboard-like grid of 19 lines by 19 lines. The winner is the player who manages to seal off more territory.
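The territory rule described above can be sketched in code. The following is a toy area-scoring counter I have written for illustration; it counts empty points bordered exclusively by one colour, and deliberately ignores captures, ko and dead-stone judgement, so it is a simplification rather than an official ruleset:

```python
from collections import deque

def count_territory(board, size=19):
    """Count empty points surrounded exclusively by one colour.

    `board` maps (row, col) -> 'B' or 'W'; empty points are absent.
    Simplified area scoring: no captures, ko, or dead-stone judgement.
    """
    seen = set()
    territory = {'B': 0, 'W': 0}
    for r in range(size):
        for c in range(size):
            if (r, c) in board or (r, c) in seen:
                continue
            # Flood-fill this empty region, noting which colours border it
            region, borders, queue = [], set(), deque([(r, c)])
            seen.add((r, c))
            while queue:
                cr, cc = queue.popleft()
                region.append((cr, cc))
                for nr, nc in ((cr-1, cc), (cr+1, cc), (cr, cc-1), (cr, cc+1)):
                    if not (0 <= nr < size and 0 <= nc < size):
                        continue
                    if (nr, nc) in board:
                        borders.add(board[(nr, nc)])
                    elif (nr, nc) not in seen:
                        seen.add((nr, nc))
                        queue.append((nr, nc))
            # A region counts only if a single colour encloses it
            if borders == {'B'}:
                territory['B'] += len(region)
            elif borders == {'W'}:
                territory['W'] += len(region)
    return territory
```

On a tiny 2x2 board where black holds (0, 1) and (1, 0), both remaining empty points touch only black stones, so black is credited with two points of territory.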

AlphaGo uses two sets of "deep neural networks" that allow it to crunch data in a more human-like fashion, discarding millions of potential moves that human players would instinctively know were pointless.

It also employs algorithms that allow it to learn and improve from matchplay experience.
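The two-network division of labour described above can be sketched as follows. This is a toy illustration, not DeepMind's implementation: `policy_net` and `value_net` here are hand-written heuristics standing in for the trained deep networks, and the real AlphaGo combines its networks with Monte Carlo tree search rather than the one-step lookahead shown:

```python
BOARD_SIZE = 19

def policy_net(board, moves):
    """Stand-in for the policy network: scores candidate moves.

    A toy heuristic (prefer central points) replaces the real deep
    network, which was trained on expert games and self-play."""
    center = (BOARD_SIZE - 1) / 2
    def score(m):
        r, c = m
        return -(abs(r - center) + abs(c - center))
    return {m: score(m) for m in moves}

def value_net(board):
    """Stand-in for the value network: estimates the chance of winning
    from a position. Here, crudely, the fraction of points occupied."""
    return sum(board.values()) / (BOARD_SIZE * BOARD_SIZE)

def select_move(board, legal_moves, beam=8):
    """Prune with the policy net, then evaluate survivors with the value net.

    This mirrors the article's description: most moves are discarded
    up front, and only a handful are examined more deeply."""
    scored = policy_net(board, legal_moves)
    candidates = sorted(legal_moves, key=scored.get, reverse=True)[:beam]
    def after(move):
        nxt = dict(board)
        nxt[move] = 1  # play a stone (encoded simply as 1)
        return value_net(nxt)
    return max(candidates, key=after)

all_moves = [(r, c) for r in range(BOARD_SIZE) for c in range(BOARD_SIZE)]
print(select_move({}, all_moves))  # → (9, 9), the centre point
```

The design point is the separation of concerns: one network narrows the search to plausible moves, the other judges how good the resulting positions look, so the program never has to enumerate the full move space.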

Google co-founder Sergey Brin, who was in Seoul to witness AlphaGo's victory, said watching great Go players was like "watching a thing of beauty."

"I'm very excited we've been able to instill this kind of beauty in a computer," Brin said.



19 comments


torbjorn_b_g_larsson
5 / 5 (1) Mar 12, 2016
"Deep _neural_ networks" perhaps!?

"Master AI" will leave a lasting legacy. I hear 2015 was the year AIs took over many or most complicated software tasks, the hardware is now capable enough.

Let's hope we can train AIs in morals the way we trained them in Go. "No, you pour the tea water in the cup on the table, never in anyone's lap."
srikkanth_kn
1 / 5 (1) Mar 12, 2016
Way to 'Go' Google, Rise of machines...
Next task.. solve global warming, Boost up fledgling economies, formulate theory of everything, list is endless.
srikkanth_kn
1 / 5 (1) Mar 12, 2016
And as @torbjorn pointed - ethics is important ' don't hit the bus '
petrostherock
1 / 5 (1) Mar 12, 2016
"A Google-developed computer programme " should be "program"
antigoracle
Mar 12, 2016
This comment has been removed by a moderator.
RichManJoe
1 / 5 (1) Mar 12, 2016
The DOD wants one of these computers to study war.
betterexists
1 / 5 (1) Mar 12, 2016
More possible Move configurations than there are Atoms in the Universe--That is Something. That too 3000 yr old. New Alphago now Beats Human Brain. Quite Phenomenal, Indeed!
AlphaGo is a computer program developed by Google DeepMind in London to play the board game Go. Google DeepMind is a British artificial intelligence company founded in 2010 as DeepMind Technologies. It was renamed when it was acquired by Google in 2014.
kochevnik
4.2 / 5 (5) Mar 12, 2016
Good job, Google. Now try winning an argument with a woman
Captain Stumpy
4 / 5 (4) Mar 12, 2016
Good job, Google. Now try winning an argument with a woman
@kochevnik
i've heard it can be done... theoretically...

but i've never seen it

LMFAO

20 stars for that one K!
matt_s
1 / 5 (1) Mar 12, 2016
That's along the lines of what I was thinking joe.
richardwenzel987
1 / 5 (1) Mar 12, 2016
I would like to see this program applied to the problem of improving its own code.
antialias_physorg
5 / 5 (4) Mar 12, 2016
I would like to see this program applied to the problem of improving its own code.

That is possible, but only if you can design a good metric for what 'improved code' actually means. (And not just in a very vague way like "more intelligent" - such a metric needs to be objectively quantifiable)

In the end such an approach would only make the code more efficient, but would not increase the quality of the solutions the neural network arrives at. What you are asking is to have the code work on the substrate: like asking it to design a more efficient brain. But a more efficient brain isn't automatically smarter.
TehDog
5 / 5 (2) Mar 12, 2016
I need to know so much more than this article supplies, starting here;
http://www.nature...961.html

Multi-layered NN's, need to know hierarchy, common layers, shared resources, and I'm rambling incoherently :)
dan42day
1 / 5 (1) Mar 12, 2016
Improved code = aware.
NeutronicallyRepulsive
1 / 5 (1) Mar 13, 2016
Lee Sedol strikes back 3:1.
richardwenzel987
3 / 5 (2) Mar 13, 2016
@anti-alias-- well, just as an exercise. It would be a step in the direction of self-reference. If it could make itself more efficient, that, alone would be interesting. I agree that some sort of selection process would be required. In the case of our own brains, survival in an external world (later incorporating a social world) provided the filter. But that is really slow. Should be some way to make a pathway to a smarter AI program more efficient and autonomous, in the sense that you leave your program alone for a while and then come back in a few years to discover that you are obsolete.
PhysicsMatter
not rated yet Mar 13, 2016
That tells you how our measure of intelligence is wrong. Humans can think and act like machines, and they mostly do. And this test was all about which machine, human or non-human, is better at this computing game. Of course the machine is faster at computing. But machines will never think like humans and will never act like humans can. If they do acquire awareness they will commence making mistakes, intelligent mistakes.
antialias_physorg
5 / 5 (3) Mar 13, 2016
But machines will never think like humans and will

If you read into how the algorithm works it's very close to how a human thinks. It has an overall view to narrow down where the most promising regions for further exploration are - and then checks variations within these regions. That's pretty much how you play any (complicated) game.

In the end I don't get this mania to have machines act like humans. Why limit yourself to human? Why not aim for something better? The way humans act ain't all that grand.

If they do acquire awareness they will commence making mistakes, intelligent mistakes.

As shown in the match, the neural network does make mistakes (all neural networks do; they are not exact algorithms)
antigoresockpuppet
3 / 5 (2) Mar 15, 2016

antigoracle Mar 12, 2016
This comment has been removed by a moderator.


Why the persecution of sugar daddy when this pervert is allowed to spew??? Look at the first comment (assuming someone doesn't add another one between now and when you look at the page). https://www.mywot...mbla.org
