Ready, Set, Go! Rematch of man vs machine in ancient game


It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI.

China's 19-year-old Ke Jie is given little chance in the three-game series beginning Tuesday in the eastern Chinese city of Wuzhen after AlphaGo stunned observers last year by trouncing South Korean grandmaster Lee Se-Dol four games to one.

Lee's loss in Seoul marked the first time a computer programme had beaten a top player in a full match in the 3,000-year-old Chinese board game, and has been hailed as a landmark event in the development of AI.

AI has previously beaten humans in cerebral contests, starting with IBM's Deep Blue defeating chess grandmaster Garry Kasparov in 1997, but AlphaGo's victory last year is considered the most significant yet for AI.

Go is considered perhaps the most complex game ever devised, with a number of possible move combinations too vast to compute, which puts a premium on "intuition."
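
To give a sense of scale, here is a rough back-of-envelope comparison using commonly cited estimates (roughly 35 legal moves per turn over about 80 plies in chess, versus roughly 250 legal moves over about 150 moves in Go); the exact figures vary by source, but the gap of hundreds of orders of magnitude does not.

```python
# Back-of-envelope comparison of game-tree sizes, using commonly cited
# rough estimates (branching factors and game lengths are approximations,
# not figures from the article).
import math

chess_branching, chess_length = 35, 80   # ~35 legal moves per turn, ~80 plies
go_branching, go_length = 250, 150       # ~250 legal moves per turn, ~150 moves

chess_exponent = chess_length * math.log10(chess_branching)
go_exponent = go_length * math.log10(go_branching)

print(f"chess game tree ~ 10^{chess_exponent:.0f}")  # roughly 10^124
print(f"go game tree    ~ 10^{go_exponent:.0f}")     # roughly 10^360
```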

Go's proponents had considered it a bastion in which human thought would remain superior, at least for the foreseeable future.

AlphaGo's triumph fuelled hopes of a brave new world in which AI is applied not only to driverless cars or "smart homes", but to helping mankind figure out some of the most complex scientific, technical, and medical problems.

"AlphaGo's successes hint at the possibility for general AI to be applied to a wide range of tasks and areas, to perhaps find solutions to problems that we as human experts may not have considered," Demis Hassabis, founder of London-based DeepMind, which developed AlphaGo, said ahead of this week's matches.

AI's ultimate goal is to create "general" or multi-purpose, rather than "narrow," task-specific intelligence—something resembling human reasoning and the ability to learn.

Sci-fi nightmare?

But for some, it conjures sci-fi images of a future in which machines "wake up" and enslave humanity.

Physicist Stephen Hawking is a leading voice for caution, warning in 2015 that computers may outsmart humans, "potentially subduing us with weapons we cannot even understand."

Ke faces AlphaGo on Tuesday, Thursday and Saturday.

Ke is a brash prodigy who went pro at 11 years old, has been world number one for more than two years, and has described himself as a "pretentious prick".

After AlphaGo flattened Lee, Ke declared he would never lose to the machine.

"Bring it on," he said on China's Twitter-like Weibo.

But he has tempered his bravado since then.

Ke was among many top Chinese players who were trounced in online contests in January by a mysterious adversary that reportedly won 60 straight games.

That opponent—cheekily calling itself "The Master"—was later revealed by DeepMind to have been an updated AlphaGo.

"Even that was not AlphaGo's best performance," Gu Li, a past national champion, told Chinese state media last week.

"It would be very hard for Ke to play against it, but then again, Ke has also been working extremely hard to change his methods in preparation. I hope he can play well."

Go involves two players alternately laying black and white stones on a grid. The winner is the player who seals off the most territory.
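
As a rough illustration of those rules (a sketch only, not code from AlphaGo or any real Go engine), a board can be modelled as a 19x19 grid, and territory can be counted by flood-filling empty regions and crediting any region bordered by a single colour; real scoring also involves captures, life-and-death judgements and komi.

```python
# Minimal sketch: a Go board as a 19x19 grid plus a naive territory count.
from collections import deque

SIZE = 19
EMPTY, BLACK, WHITE = ".", "B", "W"
board = [[EMPTY] * SIZE for _ in range(SIZE)]

def neighbours(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE:
            yield r + dr, c + dc

def territory(board):
    """Flood-fill each empty region; award it to a colour only if that
    colour alone borders the region."""
    seen, score = set(), {BLACK: 0, WHITE: 0}
    for r in range(SIZE):
        for c in range(SIZE):
            if board[r][c] != EMPTY or (r, c) in seen:
                continue
            region, borders, queue = [], set(), deque([(r, c)])
            seen.add((r, c))
            while queue:
                cr, cc = queue.popleft()
                region.append((cr, cc))
                for nr, nc in neighbours(cr, cc):
                    if board[nr][nc] == EMPTY and (nr, nc) not in seen:
                        seen.add((nr, nc))
                        queue.append((nr, nc))
                    elif board[nr][nc] != EMPTY:
                        borders.add(board[nr][nc])
            if len(borders) == 1:
                score[borders.pop()] += len(region)
    return score

board[3][3] = BLACK    # players alternate placing stones
board[15][15] = WHITE
print(territory(board))  # two lone stones: the whole board is one neutral region
```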

AlphaGo uses two sets of "deep neural networks" containing millions of connections similar to neurons in the brain.

It is partly self-taught—having played millions of games against itself after initial programming.
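
DeepMind has described the two as a "policy" network that proposes promising moves and a "value" network that estimates the chance of winning from a position. The toy sketch below only illustrates that division of labour; the layer sizes, encoding and random (untrained) weights are made up for illustration and bear no relation to the real models.

```python
# Toy illustration of the two-network idea (not AlphaGo's real models):
# a "policy" net scores candidate moves and a "value" net estimates the
# chance of winning from a position.
import numpy as np

rng = np.random.default_rng(0)
BOARD_POINTS = 19 * 19

def make_net(sizes):
    """Random fully connected layers (the weights would normally be learned)."""
    return [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes, sizes[1:])]

def forward(net, x):
    for w in net:
        x = np.tanh(x @ w)
    return x

policy_net = make_net([BOARD_POINTS, 128, BOARD_POINTS])  # move preferences
value_net = make_net([BOARD_POINTS, 128, 1])              # win estimate

position = rng.standard_normal(BOARD_POINTS)   # stand-in for an encoded board

move_logits = forward(policy_net, position)
move_probs = np.exp(move_logits) / np.exp(move_logits).sum()
best_move = int(move_probs.argmax())           # point the policy likes most

win_estimate = forward(value_net, position)[0]  # value in (-1, 1), losing to winning
print(best_move, round(float(win_estimate), 3))
```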


© 2017 AFP

User comments

May 21, 2017
AlphaGo has been able to study human masters, including competitors who haven't (yet) changed their methods. Those humans haven't (yet) been able to study AlphaGo, at least not nearly as much. It will be interesting when humans who have been able to study as many AlphaGo games compete with AlphaGo. We might see human masters become better at playing AlphaGo's style than any humans have been at playing the human style. And perhaps some of those might beat AlphaGo.

Until AlphaGo studies them and changes its style. Finally a predator/prey cycle with humans in it again.

May 22, 2017
The most complex game I've ever seen is Dota.

Incomplete information, immense complexity: no two games are the same even two seconds after the game starts, and the combinations are nearly endless. You need to understand the minds of nine other people and predict their moves, when you don't know what five of them are doing most of the time. There's also some randomness.

The game also has strategy, positioning and tactics pretty similar to chess.

May 22, 2017
Until AlphaGo studies them and changes its style.

That's not as easy as it sounds. In Go (and in all other games that don't include an element of chance) you often have two factors that determine large parts of play:
Joseki - a set of known optimal moves (i.e. a sequence where any deviation weakens your position; often these are opening moves, but they can also be mid- or end-game moves when a certain constellation occurs)
and
Atari - a forced move (i.e. a situation where not making a certain move is the same as losing the game, or turning a potentially winning situation into, at best, a draw)

In extreme cases you can have Joseki and Atari from the first to the last move (as in connect-4: with optimal play, the starting player has a set of moves he can follow, the second player can only defend each time with a certain move, and the very last stone played gives the starting player the win).
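
For a concrete sketch of what such forced, optimal play looks like (connect-4 itself is far too large to solve in a few lines, so this uses the much smaller take-away game as a stand-in: players alternately remove one to three stones and whoever takes the last stone wins):

```python
# Sketch of forced/optimal play in a tiny solved game (a stand-in for
# connect-4, which is far too big to solve here): players alternately
# take 1-3 stones and whoever takes the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(pile):
    """True if the player to move can force a win from this pile size."""
    return any(not wins(pile - take) for take in (1, 2, 3) if take <= pile)

def best_move(pile):
    """Return a winning move if one exists, otherwise just take one stone."""
    for take in (1, 2, 3):
        if take <= pile and not wins(pile - take):
            return take
    return 1

# Play out a game from 10 stones: 10 is not a multiple of 4, so the first
# player (player 0) has a forced win and the second player can only respond.
pile, player = 10, 0
while pile > 0:
    take = best_move(pile)
    print(f"player {player} takes {take}, leaving {pile - take}")
    pile -= take
    player ^= 1
print(f"player {player ^ 1} took the last stone and wins")
```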

May 22, 2017
Well, the whole problem is hard. But the point is that actual intelligence learns from defeats and is not a static target. Humans have excelled at this, at least compared to our competing species. It's the metaprogramming that gives us the intelligence edge. AIs are now starting to program themselves in extremely narrow, artificially compartmentalized domains with some superiority, but any metaprogramming is far behind - entirely dependent on the smartest humans, and in even more confined problem niches.

antialias_physorg:
That's not as easy as it sounds.

May 22, 2017
Beyond the first metaprogramming derivative are many further layers. Humans have been intuitively good metaprogrammers, and at least often adequate at "meta^n" programming, but rarely competent at even describing meta^n programming.

We typically degrade into merely gaming the system. Case in point is religion which meta^n programs social games at the expense of its stated goals despite their stated absolute priority.

Epistemology: a rarefied application typically declaring human cognition in terms of the latest human created machines, and so mostly obsoleted every generation or so in favor of the next machine paradigm.

Psychology: notoriously intractable and unreliable. It mostly navel-gazes to provide companionship for lonely people otherwise lacking social boundaries, a superficial massaging of opaque real mental contents.

AIs rely on inadequate human insight competence to usurp our metaprogramming competence.

EmceeSquared:
Well, the whole problem is hard

May 22, 2017
Is DOTA's complexity compared quantitatively to poker's somewhere?

Here's some DOTA enthusiasts discussing this point:
https://www.dotab...at-times

But for that matter, wouldn't something like Dungeons & Dragons, roleplaying with few constraints, be the most complex? Or maybe the most complex is *marriage* ;).

tmarksur:
The most complex game


May 22, 2017
Nah, Dwarf Fortress would be the most complex game.

May 22, 2017
Well, the whole problem is hard. But the point is that actual intelligence learns from defeats and is not a static target.

Since these AIs get to their skill level by playing millions of games against themselves: Yes, that is exactly how they learn.
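
A minimal sketch of that self-play loop (nothing like AlphaGo's actual training pipeline, and using the trivial take-away game rather than Go): a single shared value table plays both sides against itself, nudges its move estimates toward each game's outcome, and ends up rediscovering the known optimal strategy on its own.

```python
# Minimal self-play sketch (not AlphaGo's algorithm): one shared value
# table plays both sides of the take-away game (remove 1-3 stones, last
# stone wins) and learns from the outcomes of its own games.
import random

random.seed(0)
values = {}   # (pile, move) -> estimated chance that making this move wins

def choose(pile, explore=0.1):
    """Mostly pick the highest-valued move, but explore occasionally."""
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: values.get((pile, m), 0.5))

for game in range(20000):
    pile, player = 10, 0
    history = {0: [], 1: []}             # moves made by each player
    while pile > 0:
        move = choose(pile)
        history[player].append((pile, move))
        pile -= move
        player ^= 1
    winner = player ^ 1                  # whoever took the last stone
    for p, moves in history.items():
        outcome = 1.0 if p == winner else 0.0
        for key in moves:                # nudge estimates toward the result
            old = values.get(key, 0.5)
            values[key] = old + 0.05 * (outcome - old)

# After enough self-play the greedy move from 10 stones should be to take 2
# (leaving a multiple of 4), which matches the known optimal strategy.
print(choose(10, explore=0.0))
```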

but any metaprogramming is far behind - entirely dependent on the smartest humans

I wouldn't be too sure of that. Computer programs have already figured out stuff we haven't. The number of fields where computers beat humans is rapidly growing - and humans have no way of catching up. I'm not sure I like this, but that's the reality of it.

And I don't think the human meta-game is as far out there as you make it out to be. Brains are evolved systems, with biological limits to how much knowledge they can integrate.

Dwarf Fortress would be the most complex game

But you can't really win it.

May 22, 2017
antialias_physorg:
Since


Yes, I was including AIs in "actual intelligences".

antialias_physorg:
I wouldn't


AIs have figured out stuff humans haven't. But that's not really metaprogramming. Where is their success in programming new AIs, which is metaprogramming? Or even in figuring out that they're figuring anything out, or that they're thinking, which is a first step towards (but not arriving at) the first level of metaprogramming? All I see, especially where AIs outperform any humans (not just experts), is immediate cognition, and perhaps programming, but not metaprogramming.

I wasn't entirely clear, but I was saying that humans aren't so far ahead at metaprogramming:
" Humans have excelled at this, at least compared to our competing species."

Then I gave examples of metaprogramming where we're better than AIs (so far) but not because we're that good.

Our biological limits are probably pretty high, despite our actual (low) performance.

May 23, 2017
Where is their success in programming new AIs, which is metaprogramming?

Done.
https://www.techn...oftware/

Or even in figuring out that they're figuring anything out

That would be consciousness. That's not part of AI (and also not part of the term 'intelligence'). I'd say that's moving the goalpost.

All I see, especially where AIs outperform any humans (not just experts), is immediate cognition, and perhaps programming, but not metaprogramming.

You need to delve a bit deeper into the subject, then. Neural networks reconfigure themselves to solve tasks (that's the entire point of NN architecture...it's pretty much how brain plasticity works as well, only a lot more flexible - albeit still with far fewer neurons/connections than human brains due to technical limitations.)
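
At toy scale, "reconfiguring" just means the connection weights change while the wiring diagram stays fixed; the hand-rolled single-neuron example below (not any real framework, AlphaGo model or brain simulation) shows that this shift of weights is what ends up solving the task.

```python
# Toy illustration of a network "reconfiguring" itself: the architecture
# (one neuron with two input connections) stays fixed, while training
# reshapes the connection weights via gradient descent.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))           # 200 random 2-D inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # simple linearly separable target

w = rng.standard_normal(2) * 0.1            # the neuron's "connections"
print("weights before training:", w)

for _ in range(500):
    pred = 1 / (1 + np.exp(-(X @ w)))       # sigmoid output of the neuron
    grad = X.T @ (pred - y) / len(y)        # logistic-loss gradient
    w -= 0.5 * grad                         # reconfigure the connections

pred = 1 / (1 + np.exp(-(X @ w)))
print("weights after training: ", w)        # same wiring diagram, new weights
print("accuracy:", ((pred > 0.5) == y).mean())
```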

at least compared to our competing species

Which species is competing with us in this?

May 23, 2017
antialias_physorg:
Done.


That MIT article mentions DeepMind, calls it "promising", but doesn't describe any success in metaprogramming:

"But [Yoshua Bengio / U Montreal] notes that so far it requires such extreme computing power that it's not yet practical to think about lightening the load, or partially replacing, machine-learning experts."

"Otkrist Gupta, a researcher at the MIT Media Lab, believes that will change."

Hasn't changed yet.

You need


I'm a Google platform developer, including for Cloud, Machine Learning and other parts of its cutting-edge AI. I coded my first neural net in 1986 when Rumelhart and McClelland's _Parallel Distributed Processing_ came out. I haven't heard of any successes in actual metaprogramming, nothing like humans' general-purpose capacities.

Which species


Many of our predator and prey species.

That would


Metaknowledge, basic consciousness, is a part of metaprogramming.

May 24, 2017
I haven't heard of any successes in actual metaprogramming, not anything like humans' general purpose capacities.

That would be a bit of a leap to transcend human capabilities in this right now, as AI research in that area is still in its infancy. But whereas human minds are limited (one mind can only learn and do things until it dies, and another mind has to start the whole process over, albeit able to learn from previous publications), machine AIs are, potentially, not so limited. So I'm not holding on to many illusions about human minds remaining superior forever (or even much longer) in almost any area. Including scientific research.

I was at a medical imaging conference this year (MICCAI) and the stuff that DNNs and CNNs can do already surpasses the diagnostic abilities of experts. Not only in image recognition but soon also in integrating patient history and external factors that would overwhelm a human physician.

May 24, 2017
antialias_physorg:
That would


Oh, I'm not saying AIs transcend human metaprogramming capabilities, just that I haven't seen any AI successes in it yet. And since AIs therefore still depend on humans for programming, while humans aren't that good at articulating (and so programming) how that works, metaprogrammer AIs aren't coming as quickly as programmer AIs.

Meanwhile humans augment our minds with machines, including AIs. Direct interfaces for sensing are arriving (e.g. hearing/seeing implants), with communications imminent, so "intermind" communications are not far behind. Which also means implantable AIs (if at first by network connection).

So I don't think exclusively direct competition is coming. I think a synthesis is coming. Which is what I'd expect from a species that invented AI to extend its own intelligence. That is a metaprogramming exercise that humans seem to be moving into. And while AIs will be used to achieve it, humans can take the credit for it.
