Game over? New AI challenge to human smarts (Update)

March 8, 2016 by Mariëtte Le Roux, Pascale Mollard
Lee Se-dol has for a decade held the world crown in Go, a board game widely played in East Asia for centuries

Every two years or so, computer speed and memory capacity double—a head-spinning pace that experts say could see machines become smarter than humans within decades.

This week, one test of how far Artificial Intelligence (AI) has come will happen in Seoul: a five-day battle between man and machine for supremacy in the 3,000-year-old Chinese board game Go.

Said to be the most complex game ever designed, with an incomputable number of move options, Go requires human-like "intuition" to prevail.

"If the machine wins, it will be an important symbolic moment," AI expert Jean-Gabriel Ganascia of the Pierre and Marie Curie University in Paris told AFP.

"Until now, the game of Go has been problematic for computers as there are too many possible moves to develop an all-encompassing database of possibilities, as for chess."

Go reputedly has more possible board configurations than there are atoms in the Universe.
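The comparison can be checked with a few lines of arithmetic: a 19×19 Go board has 361 points, each empty, black or white, giving an upper bound of 3^361 placements, while the observable Universe is usually estimated to hold around 10^80 atoms. A quick sketch in Python:

```python
# Upper bound on Go board configurations: each of the 19*19 = 361 points
# is empty, black, or white. (Most such placements are illegal positions,
# but even the legal count is astronomically large.)
upper_bound = 3 ** 361

# Number of decimal digits, i.e. roughly the power of ten.
digits = len(str(upper_bound))
print(digits)  # 173 -> about 10^172, dwarfing the ~10^80 atoms in the Universe
```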

Mastery of the game by a computer was thought to be at least a decade away until last October, when Google's AlphaGo programme beat Europe's human champion, Fan Hui.

Google has now upped the stakes, and will put its machine through the ultimate wringer in a marathon match starting Wednesday against South Korean Lee Se-dol, who has held the world crown for a decade.

South Korean Go grandmaster Lee Se-Dol (C) with Google Deepmind head Demis Hassabis (L) and Eric Schmidt (R), the executive chairman of Google owner Alphabet, at a conference ahead of the Google DeepMind Challenge Match in Seoul on March 8, 2016

Initially confident of winning by 5-0, or 4-1 at worst, and taking home the $1 million (908,000 euro) prize money, Lee appeared less sure of victory by Tuesday.

He told reporters in Seoul the programme seemed to work "far more efficiently" than he thought at first, and "I may not beat AlphaGo by such a large margin".

Man vs Machine

Game-playing is a crucial measure of AI progress—it shows that a machine can execute a certain "intellectual" task better than the humans who created it.

Key moments included IBM's Deep Blue defeating chess Grandmaster Garry Kasparov in 1997, and the Watson supercomputer outwitting humans in the TV quiz show Jeopardy in 2011.

But AlphaGo is different.

It is partly self-taught—having played millions of games against itself after initial programming to hone its tactics through trial and error.

IBM's Deep Blue defeated Russian chess Grandmaster Garry Kasparov in 1997

"AlphaGo is really more interesting than either Deep Blue or Watson, because the algorithms it uses are potentially more general-purpose," said Nick Bostrom of Oxford University's Future of Humanity Institute.

Creating "general" or multi-purpose intelligence, rather than "narrow", task-specific intelligence, is the ultimate goal in AI—something resembling human reasoning based on a variety of inputs, and self-learning from experience.

"So, if the machine can do new things when needed, then it has 'true' intelligence," Bostrom's colleague Anders Sandberg told AFP.

In the case of Go, Google developers realised a more "human-like" approach would win over brute computing power.

AlphaGo uses two sets of "deep neural networks" containing millions of connections similar to neurons in the brain.

It is able to predict a winner from each move, thus reducing the search base to manageable levels—something co-creator David Silver has described as "more akin to imagination".
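As a rough illustration of the idea (a sketch only, not DeepMind's code): a value estimate per candidate move lets a search keep just the most promising branches instead of expanding all of them. The `value_net` below is a hypothetical stand-in heuristic, not a real network.

```python
# Sketch of value-guided pruning: score each legal move with a (hypothetical)
# value network and expand only the top few, shrinking the search space.

def prune_moves(legal_moves, value_net, keep=5):
    """Rank moves by estimated value and keep the `keep` most promising."""
    return sorted(legal_moves, key=value_net, reverse=True)[:keep]

# Dummy stand-in for a value network: prefers moves near point 180.
moves = list(range(250))          # Go's branching factor is roughly 250
best = prune_moves(moves, value_net=lambda m: -abs(m - 180))
print(best)  # the five moves scored closest to the dummy optimum
```

The payoff is that a branching factor of roughly 250 collapses to a handful of candidates at every ply, which is what makes deep look-ahead tractable.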

Professor Stephen Hawking is among the leading voices of caution regarding artificial intelligence

Master or servant?

What if we manage to build a truly smart machine?

For some, it means a world in which robots take care of our sick, fly and drive us around safely, stock our fridges, plan our holidays, and do hazardous jobs humans should not or will not do.

For others, it evokes apocalyptic images in which hostile machines are in charge.

Physicist Stephen Hawking is among the leading voices of caution, warning last May that smart computers may out-smart and out-manipulate humans, one day "potentially subduing us with weapons we cannot even understand."

For Sandberg, it will be up to us to build "values" into the operating system of intelligent computers.

There are more than 10 million robots in the world today, according to Bostrom—everything from rescuers, surgical assistants, home-cleaners, route-finders, lawn-mowers and factory workers to robotic pets.

But while machines may beat us at checkers or maths, some experts think robots may never rival humans in some aspects of "true" intelligence.

Things like "common sense" or humour may never be reproducible, said Ganascia.

"We can imagine that in the future, ever more tasks will be executed by machines better than by humans," he said.

"But that does not mean that machines will be able to automate everything that our cognitive faculties allow us to do. In my view, this is a limitation that keeps the scientific discipline of AI in check."

For Lee, it now seems "inevitable" that AI will ultimately defeat humans at Go.

"But robots will never understand the beauty of the game the same way that we humans do," he said.


78 comments


not rated yet Mar 08, 2016
The basic theory on which one chess program can be constructed is that there exists a general characteristic of the game of chess, namely the concept of entropy. We can think about the positive logarithmic values as the measure of entropy and the negative logarithmic values as the measure of information.
https://www.acade...lligence
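For readers unfamiliar with the terms in the comment above: the logarithmic measure being described is Shannon entropy, H = -Σ p·log2(p). A minimal Python illustration (not tied to any particular chess program):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform choice among 4 moves carries 2 bits of entropy;
# a forced move (probability 1) carries none.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
print(shannon_entropy([1.0]))                      # zero entropy
```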
antigoracle
2 / 5 (8) Mar 08, 2016
We keep expressing concerns about computers becoming smarter, when we should really be worried by humans becoming dumber. Case in point. In the US THEY may elect a president who is a racist, sexist, narcissist... well... in short, a cyst.
TheGhostofOtto1923
4.3 / 5 (6) Mar 08, 2016
We keep expressing concerns about computers becoming smarter, when we should really be worried by humans becoming dumber. Case in point. In the US THEY may elect a president who is a racist, sexist, narcissist... well... in short, a cyst.
Well AI should help in differentiating and disseminating honest and accurate news information instead of the biased crap you've obviously been exposed to.

In the future lies will be illegal and shortly thereafter impossible.
Captain Stumpy
3 / 5 (4) Mar 08, 2016
In the future lies will be illegal and shortly thereafter impossible
What Will Georgie Do?
LMFAO

as much as i would like to believe this last part, Otto, i don't think it will happen until we've been subjugated or intentionally allow AI to rule, which may not happen given the nature of us "real" humans (- note: that "real" crack is an intentional poke re: beni-liar-kam -LOL)

Noumenon
5 / 5 (1) Mar 08, 2016
In the future lies will be illegal and shortly thereafter impossible.


I would say that if freedom of speech protection is overturned in the future, then humanity has more pressing problems than simply lies.

Noumenon
not rated yet Mar 08, 2016
"But that does not mean that machines will be able to automate everything that our cognitive faculties allow us to do. In my view, this is a limitation that keeps the scientific discipline of AI in check."


What is required as a prerequisite to 'reproducing in essence a mind', or an A.I. equivalent, is of course an understanding of how our own mind works. In particular, consciousness, and how qualia like colour, sound, pain,... manifest from biophysical laws.

This is an unsolved problem and is not even a proper problem of A.I.,... it is a problem of the physical sciences.

krundoloss
not rated yet Mar 08, 2016
I think the goal of a true AI is a worthy one, but it seems that so many are caught up in the idea of "creating something that we don't understand". I think the best approach right now is to build up a knowledge base that is meant for machines/AI to understand, something they can use to help understand the world. Right now the internet is built for humans, but what if you built a database/network used for machines to build an understanding of the world? It would take an enormous amount of storage, but eventually a robot, with the proper senses, can look at the world as we do. They could then process their environment and start to understand. Examples: There is a chair. I am in a building at this location. The material strength of this object is X. So I can pick up the chair and move it over here. Hey other robot, it works if you do it this way. And So on. It would be easier than trying to build an autonomous mind inside one robot, why not build a robot-internet for them all?
krundoloss
not rated yet Mar 08, 2016
I know most would say "what does it mean to understand", and when I use that term, it just means there is a physical awareness, and eventually a situational awareness as well. The most helpful things robots could do for us right now is to help us in the physical world, such as in rescue missions or just getting a drink from the fridge. Working this out first would be a good step forward, as once we have a robot interacting in an environment, being able to build knowledge, then we can start incorporating learning algorithms and build upon that.
Noumenon
not rated yet Mar 08, 2016
I know most would say "what does it mean to understand", and when I use that term, it just means there is a physical awareness, and eventually a situational awareness as well.


IMO, it is inappropriate to use such loaded terms like "understanding" with reference to A.I.

The term "understanding" implies a conscious synthesis of perceptual experience.

WRT A.I., it is more appropriate, IMO, to use phrases instead like 'autonomous information processors',... without the implication of any conscious understanding.

There are many functional aspects of the brain/mind that A.I. can accurately simulate, or reproduce in essence,... but they tend to be unconsciously carried out in humans.

TheGhostofOtto1923
4 / 5 (4) Mar 08, 2016
as much as i would like to believe this last part, Otto, i don't think it will happen until we've been subjugated or intentionally allow AI to rule, which may not happen given the nature of us "real" humans...

WRT A.I., it is more appropriate, IMO, to use phrases instead like 'autonomous information processors',... without the implication of any conscious understanding
'Conscious understanding'?

Our faulty memories, faulty cognition, faulty intellects due to accrued damage and genetic deformity, constant distraction of pain, hunger, and thirst, and constant preconscious influence of the desire to survive in order to reproduce... leave us mostly unaware of why we think what we do.

Machines will be hobbled with none of these limitations. They know exactly how they reach the decisions they do, and so their decisions are dependable and repeatable.

And they will only have to weed out the bullshit and nonsense from our accrued store of knowledge once.
BrettC
not rated yet Mar 08, 2016
As for processing large quantities of data, they will be limited to flawed human input for as long as we influence their existence. Therefore they could never be perfect as we introduce chaos to their environment.

It's relatively pointless to worry about AI causing havoc like the movies though. How could we create something useful if we model it on something so flawed as a human. Humans are subject to all kinds of chemical reactions (e.g. hormones) that would be pointless to simulate in an AI as it would introduce the same inconsistent behavior as we display.
Noumenon
not rated yet Mar 08, 2016
'Conscious understanding'?


Their deterministic and functional nature may be a limitation, preventing them from experiencing conscious awareness, and thus failing in ways the mind excels.

If human conscious experience manifested merely on account of carrying out functional procedures, and were merely a matter of neural network dynamics, as expressed by strong-A.I,…. then the impression of "redness" and "pain" would be superfluous.

Only a detection and registering function would be needed, which would not require conscious experience at all. It could all be done "in the dark".

Why do we in fact experience "redness"? Why does the mind produce this experience? I don't mean what were the reasons for evolving that capability,… I mean why do we have conscious experience of "redness" at all,.... if the "mind" could merely be the execution of instructions or manifest merely from the dynamics of a silicon network?
Noumenon
not rated yet Mar 08, 2016
Machines will be hobbled with none of these limitations. They know exactly how they reach the decisions they do, ....


"They know", as in "understand"?

The humans outside the system who designed the A.I. machines could be said to have an understanding, to know, at least the core design, of how the machine reacts the way it does,... but I reject the notion that the machine itself can be said to have such an "understanding", ....unless those human designers themselves could answer my question about the experience of qualia,,.... "redness", "pain", etc,....

See the Chinese room argument for example.

Protoplasmix
5 / 5 (3) Mar 08, 2016
In the future lies will be illegal and shortly thereafter impossible.
I would say that if freedom of speech protection is over-turned in the future, than humanity has more pressing problems than simply lies
Fraud's already a crime pretty much. I think you'll always be free to lie; it will be 'impossible' to profit by it, or start wars by it, etc.
Jayded
1 / 5 (1) Mar 08, 2016
Can there be such a thing as truth in a subjective reality? Is the truth the aggregated mass of general perception?
Captain Stumpy
3.7 / 5 (3) Mar 09, 2016
'Conscious understanding'?
@otto
my quote with Nou's quote didn't make sense (especially as it was a poke at idiots like beni-liar-kam)

.

Fraud's already a crime pretty much.
@Proto
true.. maybe the issue Otto is talking about is actually more of an enforcement thing
fraud is also not always able to be prosecuted
Noumenon
not rated yet Mar 09, 2016
In the future lies will be illegal and shortly thereafter impossible.
I would say that if freedom of speech protection is over-turned in the future, than humanity has more pressing problems than simply lies
Fraud's already a crime pretty much. I think you'll always be free to lie; it will be 'impossible' to profit by it, or start wars by it, etc.


I agree that if fraud can be proven, or a lie leads to damages to another and they can quantify that, then there are consequences,..... but Otto just said "lies will be illegal" which without qualification conflicts with natural and constitutional rights.

Noumenon
not rated yet Mar 09, 2016
Can there be a thing as a truth in a subjective reality. Is the truth the aggregated mass of general perception?


Good point. Unless we understand how our minds produce a synthesis of experience for what we consider an 'understanding', .... A.I. will necessarily be left with the same conceptual artifacts as the condition for its understanding as our minds are, and certainly will be limited even more so on account of the lack of qualia.

IMO, there is a reason the mind evolved to produce qualia upon experience, which is likely related to consciousness and is the real power of the mind,... something strong-AI will be lacking if not understood first in ourselves.

antialias_physorg
4.4 / 5 (7) Mar 09, 2016
gg

"But robots will never understand the beauty of the game the same way that we humans do," he said.

Sort of a pointless statement. Neither will humans understand the "beauty of smell" the way dogs do (and if we ever figure out how to transfer that feeling then I see no reason why we wouldn't be able to transfer the feeling of beauty about a game to AI)

In effect he's saying "non humans will not experience stuff the way humans do". Duh.

Things like "common sense" or humour may never be reproducible

Common sense seems well within the realm of possibility for AI, since common sense is an expression of game theory. As for humor: smart people don't understand the humor of less smart people and vice versa. AI might develop their own humor which we may completely fail to understand (or even realize that it's there).

Why do people insist that the idea of creating AI must be the same as "duplicating the human mind"? It isn't, you know?
Noumenon
not rated yet Mar 09, 2016
Why do people insist that the idea of creating AI must be the same as "duplicating the human mind"? It isn't, you know?


I don't think anyone thinks otherwise.

I for one, was careful to reference the "Strong-A.I." hypothesis which states that a "programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.", that is, a thinking conscious artificial mind.

This position is prevalent enough in the A.I. industry and among enthusiasts, as well as in cognitive science, that it is entirely appropriate to address it,... even if most of A.I. actually only works on coffee makers and game machines.

TheGhostofOtto1923
3.7 / 5 (3) Mar 09, 2016
Their deterministic and functional nature may be a limitation, preventing them from experiencing conscious awareness, and thus failing in ways the mind excels
The brain is a machine. A flawed and poorly functioning machine.

It's so desperate to survive to reproduce that it conjures all sorts of worthless illusions such as soul, mind, and consciousness in order to pretend that it is too important and clever and beautiful to die.

Preening philos and priests came up with these concepts long ago because they had nothing else to prove their worth and so resorted to deception.

You think your 'mind' 'excels' (undefinable words) because you have nothing to compare it to. And because you think that declaring it 'excellent' actually makes it so.

Philos and priests are taught that authority trumps reason. Of course. It's all they got.

Go get the redbox dvd 'ex machina'. The only reason AI would want to emulate human brains would be to deceive us. For selfish purposes.
Noumenon
not rated yet Mar 09, 2016
Consciousness is not an observable phenomena? Minds don't exist? Is this what you're claiming?

Of course minds manifest ultimately from physical laws. I'm not claiming anything a priest would.


TheGhostofOtto1923
4 / 5 (4) Mar 09, 2016
@stump

I was going to add more words but then realized that I had made my point. AI will be/is far too valuable to resist. Stop lights already curb our freedom to kill ourselves. Self-driving cars are even safer.

Future gens will have an entirely different perspective on freedom. Freedom from crime, ignorance, lies, and time-wasting is preferable to the opportunity to lie, cheat, and steal that philos, priests, politicians, and psychopaths have convinced us we must preserve at all costs.

Deception was vital to the success of the wild animal but it is another trait we must surrender for the good of the tribe.

The soul is not freedom. There is no freedom in allowing ourselves to be deceived. Only science can extend our lives indefinitely and give us unlimited room in which to live them. This is freedom.

Machines have already done this for us. AI is only a matter of degree.
TheGhostofOtto1923
4.2 / 5 (5) Mar 09, 2016
Consciousness is not an observable phenomena? Minds don't exist? Is this what you're claiming?
Nou has nothing better to do than pick a fight.

Please cite a repeatable experiment hinting at the existence of this thing. Any scientific data whatsoever to indicate that it is real? What are its parameters? Can it be described mathematically?

WHAT IS IT? And what makes you think it's not just an illusion created out of wishful thinking and our inability to know why we think what we do?
TheGhostofOtto1923
4.2 / 5 (5) Mar 09, 2016
Of course minds manifest ultimately from physical laws. I'm not claiming anything a priests would
Define 'manifest'. That would be a start. And then describe exactly what it is that 'manifests'.

Describe an experiment that would help illuminate this manifesting operation.
Captain Stumpy
3.7 / 5 (3) Mar 09, 2016
@stump
I was going to add more words but then realized that I had made my point
@otto
yeah, i kinda thought that was what happened
Deception ... is another trait we must surrender for the good of the tribe
for the good of the tribe...yeah-(we should SUPPRESS it)
BUT - IMHO - i disagree "getting rid of it" is for the good of the species.
if we find another intelligent life in space, it may well be aggressive and violent (like we are now) and thus we will require our own deception and violence for survival

it doesn't seem logical to breed out traits that are directly linked to our current mastery of the planet (like our survival instinct)
The only way it would disappear as a trait is if AI domesticated humans and then took over as protector/overseer/shepherd/whatever you want to name it.

IMHO -considering that option, there is then no guarantee of our survival unless we're useful or tasty
(or pretty, like me - LOL)
krundoloss
5 / 5 (1) Mar 09, 2016
Philosophy of defining consciousness aside, a machine never really "needs" to be conscious. All it needs is to be aware, and then it can be as useful as something that could be defined as conscious. I want to walk into a room, throw a ball against the wall and catch it, then ask the robot/AI "what just happened?" If it can respond with "You walked into this room, threw a round object, it bounced off the wall and you caught it at 3:15 pm today", then you have something that can be useful. Does it mean that the robot "understands"? Well, it doesn't matter, because it is aware.
Noumenon
3 / 5 (2) Mar 10, 2016
Consciousness is not an observable phenomena? Minds don't exist? Is this what you're claiming?
Nou has nothing better to do than pick a fight.


I didn't know asking for clarification in your world equated to 'picking a fight'?

Are you not the one who implied some insult about priests and philos, and at your convenience can't seem to find a dictionary on the web?

Noumenon
not rated yet Mar 10, 2016
Please cite a repeatable experiment hinting at the existence of this thing [mind, consciousness]


Through introspection it is the most immediately observable phenomena possible. Science is founded on observation, which is not possible except through observation via a mind. You're in an extreme minority to claim minds don't exist.

WHAT IS IT? And what makes you think it's not just an illusion ....


You still act as though I am claiming that conscious mind is existent as a 'something' over and above the physical basis of the brain. I have always stated that it is an emergent phenomenon.

[The term 'emergent' is ubiquitous in science. I have explained what I mean by it. It is your responsibility to seek that understanding.]

I am only stating that conscious mind is something scientifically investigable in principle and NOT that I already have that knowledge. It is an unsolved problem at present, but is an active matter of research.

Noumenon
5 / 5 (1) Mar 10, 2016
I'm just pointing out what, absurdly, is not already obvious in the strong-A.I. enthusiasts' sci-fi fantasy world,.... that the strong-AI hypothesis has no scientific basis,... that machines do not "think" nor are they "consciously aware" the way minds are. They only carry out instructions. That A.I. is not cognitive science nor is it neurobiology, etc.

TheGhostofOtto1923
4 / 5 (4) Mar 10, 2016
machine never really "needs" to be conscious. All it needs is to be aware
What's the difference?
Through introspection it is the most immediately observable phenomena possible. Science is founded on observation, which is not possible except through observation via a mind. You're in an extreme minority to claim minds don't exist
IOW everybody knows it exists so therefore it exists.

You do realize your arguments are exactly the same as the ones used to convince us we have souls?

I'm sorry but navel gazing does not produce reliable evidence for artificial concepts like consciousness, mind, or soul.
'emergent' is ubiquitous in science... your responsibility to seek that understanding
I did. And I showed you that the scientific defs of emergence are not the same as all the various and conflicting philo defs.

This is another example of a term you guys commandeered because it made you sound relevant and knowledgeable.

You're not.
Thirteenth Doctor
5 / 5 (4) Mar 10, 2016
It's so desperate to survive to reproduce that it conjure all sorts of worthless illusions such as soul, mind, and consciousness in order to pretend that it is too important and clever and beautiful to die.


Very well put and I confess, I will probably use this in the future.

TheGhostofOtto1923
4.2 / 5 (5) Mar 10, 2016
but is an active matter of research
Your statement implies that you've already decided it exists and it's just a matter of time before science confirms it.

You can't ref any SCIENTIFIC studies on the nature of mind or consciousness because there aren't any.

There are a great many on the brain, the senses, and cognition, and I've ref'ed various researchers who have stated that your terms are simply not useful in understanding these entirely physical things.

This statement;
Through introspection it is the most immediately observable phenomena possible
-places you and your fellows back in the shadow cave right alongside the neanderthals making palm prints on the walls.

It has no meaning. It is made up of many undefinable words. It is thus uninvestigatable and thus unscientific.

'I am that I am.' Why don't you try deconstructing that statement?

Deconstruct - another word you philos pilfered and then stripped of meaning.
TheGhostofOtto1923
4.2 / 5 (5) Mar 10, 2016
It's so desperate to survive to reproduce that it conjure all sorts of worthless illusions such as soul, mind, and consciousness in order to pretend that it is too important and clever and beautiful to die.
Very well put and I confess, I will probably use this in the future
Just make sure I didn't plagiarize it before you do, 'kay?

÷)
Captain Stumpy
4.2 / 5 (5) Mar 10, 2016
It's so desperate to survive to reproduce that it conjure all sorts of worthless illusions such as soul, mind, and consciousness in order to pretend that it is too important and clever and beautiful to die.
Very well put and I confess, I will probably use this in the future
Just make sure I didn't plagiarize it before you do, 'kay?

÷)
@otto
according to http://smallseoto...checker/ it is unique and all yourn... !

congrats, it is well written and i plan on using it in the future as well (and i promise to give you sole credit)
TheGhostofOtto1923
4.2 / 5 (5) Mar 10, 2016
They only carry out instructions
SO DO WE.

Just because we are not aware of what those instructions are, and we often make mistakes and don't know why, and we often try to deceive others that we really meant to make those mistakes because we want to maintain our accrued repro rights, etc etc etc, does not mean we are more perfect than machines.

It means we are LESS perfect.

That's why we are designing machines to replace us. We know how we ought to work.

Our personalities are the sum total of our defects, not our qualities.

Machines have no need of personalities and similarly they have no need of mind or consciousness.
TheGhostofOtto1923
4.2 / 5 (5) Mar 10, 2016
@otto
according to http://smallseoto...checker/ it is unique and all yourn... !
Well I'm just saying I'm not the first one to express those sentiments.

In the future there will be no politics, no poetry, no art, no music, no religion... no need for diversion whatsoever.

And most likely no humans.
Captain Stumpy
4 / 5 (4) Mar 10, 2016
Well I'm just saying I'm not the first one to express those sentiments
@otto
well - checked the whole post too... it's still checking but you have an 80% unique post there (until it completes its check, i can't say otherwise)

it is a good point and regardless of who may have also stated similar thoughts, the actual quote is written well and makes a great point with easy to comprehend syntax

... you know, so that even the stupid people like [insert troll name here- too many to list with a 1k char limit] can understand.

And most likely no humans
considering that we can't even all agree that bacon is tasty... i think i might have to agree with this
krundoloss
not rated yet Mar 10, 2016
machine never really "needs" to be conscious. All it needs is to be aware

What's the difference?


Awareness and consciousness are different, as awareness just means the machine can sense the world around it, and perhaps interpret those activities it senses with information in its database. It does not imply self awareness.

Consciousness implies something that thinks on its own, that is self-aware. This is difficult to define and goes into all kinds of philosophical areas.

When it comes down to it, you really only Know that you are conscious, everyone else may not be. But you know when someone is aware. Awareness is more easily defined, and thus should be more easily achieved in an AI.

The point I was trying to make is to build up enough computer-usable information so that we can create a machine that can interpret things that are going on around it. Self-Driving cars are a good example of this technology coming of age.....
TheGhostofOtto1923
3.7 / 5 (3) Mar 10, 2016
It does not imply self awareness
Uh huh. We can already design machines which are far more self-aware than humans.

Self-driving cars are already more self-aware of their environment for driving purposes than us.

Do they need to be distracted by hunger and angst and road rage? They monitor their fuel level and rate of consumption, and can instantly record and report rude and aggressive humans while still maintaining uninterrupted concentration on dozens of objects in their vicinity.

In addition they will be in constant contact with other AI nearby, as well as traffic, accident, and weather reports. They think on their own when they decide to brake or turn or stop, or when they suggest alternate routes.

But no, they do not care what they look like or how long they will live or what their in-laws think of them. But we certainly could write these things into their programs.

We could even make them care about repro rights but that would affect the sticker price.
TheGhostofOtto1923
3.7 / 5 (3) Mar 10, 2016
Actually, performance monitoring and maintenance schedules would serve to improve future generations of AI cars just as competition among males and selectivity with females does.

And real-time feedback resulting in wireless software upgrades would be a way of 'nurture', of learning and acquiring knowledge.

So we have more analogs for 'consciousness'.
bluehigh
3 / 5 (2) Mar 10, 2016
Stay away from my bacon or there will be one less human.
EyeNStein
5 / 5 (1) Mar 10, 2016
The article below goes more into the architecture of the AI. It offers a fascinating insight into how the AI emulates the human insight, creativity and experience involved in Go.

http://www.extrem...-matters
krundoloss
not rated yet Mar 10, 2016
Here is a super-interesting article on AI, from 2014 but the info is solid:

http://www.wired....ligence/
I Have Questions
not rated yet Mar 11, 2016
The real question here is, will our computers ever get bored?
TheGhostofOtto1923
5 / 5 (1) Mar 13, 2016
The real question here is, will our computers ever get bored?
We can program them to get bored. Is that what you mean?
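[Editor's note: "programming boredom" is easier than it sounds, at least in a crude form. A minimal toy sketch in Python: an agent that counts repeated observations and abandons a stimulus once a threshold is crossed. The class name, threshold, and the "engage"/"seek_novelty" actions are all invented for illustration, not taken from any real system.]

```python
# Toy "boredom" signal: an agent that disengages from a repetitive stimulus.
# The threshold of 3 repeats is an arbitrary choice for illustration.

class BoredomAgent:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.last_seen = None   # most recent stimulus
        self.repeats = 0        # how many times in a row it has recurred

    def observe(self, stimulus):
        """Return 'engage' for novel input, 'seek_novelty' once bored."""
        if stimulus == self.last_seen:
            self.repeats += 1
        else:
            self.last_seen = stimulus
            self.repeats = 0
        return "seek_novelty" if self.repeats >= self.threshold else "engage"

agent = BoredomAgent()
responses = [agent.observe("herring sandwich") for _ in range(5)]
print(responses)
# → ['engage', 'engage', 'engage', 'seek_novelty', 'seek_novelty']
```

Whether a counter like this deserves the word "boredom" is, of course, exactly what the rest of the thread argues about.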
viko_mx
1 / 5 (3) Mar 14, 2016
A machine cannot by itself develop more functionality than its creator previously embedded in it. People can learn and accumulate information over time, but their brain's functionality does not change. So, much media ado about nothing.
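[Editor's note: the claim above is testable in a few lines of code. In a learning system, only the learning rule is embedded by the creator; the specific function the machine ends up with is recovered from data. A minimal least-squares sketch in Python, where the target line y = 2x + 1 is an arbitrary example the author of `fit_line` never writes in:]

```python
# Only the *learning rule* (ordinary least squares) is embedded here.
# The line y = 2x + 1 is never written into the learner; it is
# recovered from example points.

def fit_line(points):
    """Ordinary least squares for y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Training examples generated by a rule the learner never sees directly.
data = [(x, 2 * x + 1) for x in range(10)]
a, b = fit_line(data)
print(round(a, 6), round(b, 6))  # → 2.0 1.0
```

AlphaGo's value and policy networks are this idea at vastly larger scale: the move evaluations it ends up with were learned from games, not hand-embedded.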
Noumenon
1 / 5 (1) Mar 14, 2016
Through introspection [conscious experience, mind] is the most immediately observable phenomena possible.


IOW everybody knows it exists so therefore it exists. You do realize your arguments are exactly the same as the ones used to convince us we have souls?


There is no similarity whatsoever. Souls are not observable phenomena. That idea is not amenable to scientific investigation.

The conscious experience of "redness", on the other hand, is an irrefutable observable phenomenon,... and sound, pain, etc. Again, science investigates observable phenomena.

It is simply a truism that observable phenomena, generally speaking, are mind dependent,... and in the case of qualia, are mind-produced observable phenomena. You're not entitled to deny this, especially as if you could hand-wave it away.

bluehigh
2.3 / 5 (3) Mar 14, 2016
There may well be a conscious experience that can be communicated as 'redness' but 'redness' does not necessarily have shared qualities. You only know 'red' because you have been conditioned to associate a particular sensation with a word. You cannot know of my perception of 'red'.

Qualia are not observable. It's not a truism that observables are mind dependent and it's absurd to suggest your illogical assertions are undeniable and irrefutable.

Among the totally stupid crap in these comments lately, Noumenon, you get the pointy hat.

antialias_physorg
5 / 5 (5) Mar 14, 2016
The real question here is, will our computers ever get bored?

Depends on the type of neural network you give them. Boredom is a very important part of being intelligent (and conscious)
At first reading this may sound just like a bit of light fluff...but Douglas Adams hit on something pretty profound when he described boredom as absolutely important (herring sandwiches maybe not so much)

It's a 1.5 pages long read (page 51-52)...but well worth it:
https://books.goo...;f=false
bluehigh
2.3 / 5 (3) Mar 14, 2016
A robot was programmed to believe that it liked herring sandwiches. This was actually the most difficult part of the whole experiment. Once the robot had been programmed to believe that it liked herring sandwiches, a herring sandwich was placed in front of it. Whereupon the robot thought to itself, "Ah! A herring sandwich! I like herring sandwiches."

tbc
bluehigh
2.3 / 5 (3) Mar 14, 2016

It would then bend over and scoop up the herring sandwich in its herring sandwich scoop, and then straighten up again. Unfortunately for the robot, it was fashioned in such a way that the action of straightening up caused the herring sandwich to slip straight back off its herring sandwich scoop and fall on to the floor in front of the robot. Whereupon the robot thought to itself, "Ah! A herring sandwich..., etc., and repeated the same action over and over and over again. The only thing that prevented the herring sandwich from getting bored with the whole damn business and crawling off in search of other ways of passing the time was that the herring sandwich, being just a bit of dead fish between a couple of slices of bread, was marginally less alert to what was going on than was the robot.

tbc
bluehigh
2.3 / 5 (3) Mar 14, 2016

The scientists at the Institute thus discovered the driving force behind all change, development and innovation in life, which was this: herring sandwiches. They published a paper to this effect, which was widely criticised as being extremely stupid. They checked their figures and realised that what they had actually discovered was "boredom", or rather, the practical function of boredom. In a fever of excitement they then went on to discover other emotions, like "irritability", "depression", "reluctance", "ickiness" and so on. The next big breakthrough came when they stopped using herring sandwiches, whereupon a whole welter of new emotions became suddenly available to them for study, such as "relief", "joy", "friskiness", "appetite", "satisfaction", and most important of all, the desire for "happiness".

This was the biggest breakthrough of all.

~from The Hitchhiker's Guide to the Galaxy by Douglas Adams

TheGhostofOtto1923
5 / 5 (1) Mar 14, 2016
There is no similarity whatsoever. Souls are not observable phenomena. That idea is not amenable to scientific investigation
And neither are minds.

The conscious experience of "redness", on the other hand, is an irrefutable observable phenomenon
Well I refute it so now you have to prove it.
and sound, pain, etc. Again, science investigates observable phenomena
... and as 'redness' is undefinable in any sense it is therefore unobservable.
It is simply a truism that observable phenomena, generally speaking, are mind dependent
AGAIN, simply saying these things does not make them so.

It does not matter how emphatically or conclusively you say them, or from what position of authority you say them, or if you add enough words to your statement to create an entire -ism out of them, they're still only empty declarations.

IE they're meaningless. Worthless. For entertainment purposes only.
Noumenon
not rated yet Mar 14, 2016
There may well be a conscious experience that can be communicated as 'redness' but 'redness' does not necessarily have shared qualities. You only know 'red' because you have been conditioned to associate a particular sensation with a word. You cannot know of my perception of 'red'.


Why do I have to experience YOUR impression of "redness"? You are establishing an artificial condition that observation is invalid unless it is of shared qualities. Clearly I stated "through introspection" above, so your response is not even relevant to my post. Also, a correlation can easily be established that all minds (that are not defective) see "redness". This is no different from any other application of scientific methodology.


Qualia are not observable.


Now, THAT, is one of the stupidest comments in the history of Phys.Org.

You have not observed "redness" or pain, or sound?
Noumenon
not rated yet Mar 14, 2016
There is no similarity whatsoever. Souls are not observable phenomena. That idea is not amenable to scientific investigation
And neither are minds.

AGAIN, simply saying that does not make it so. You used your mind to post that absurdity. You're not entitled to claim that an entire branch of science, i.e. cognitive science, is invalid.

The conscious experience of "redness", on the other hand, is an irrefutable observable phenomenon
Well I refute it so now you have to prove it.


You have never experienced "redness" or pain, sound? Lying is not part of the scientific method.

There is no point in discussing this with one who simply denies minds exist, and who can't understand that observation is de facto mind dependent.

A.I. is not cognitive science.

Noumenon
not rated yet Mar 14, 2016
It's not a truism that observables are mind dependent and it's absurd to suggest your illogical assertions are undeniable and irrefutable.


Well actually, I stated "observable phenomena, generally speaking, are mind dependent",.... because that is what OBSERVATION means, Einstein,... i.e. to observe necessarily implies a mind doing the observing. [This is the case even if scientific instruments are used]

If you and Otto are blinking twice at such basic logic,.... and are representative of A.I. enthusiasts generally, it is no wonder that such over-the-top nonsense is claimed of future A.I.

Captain Stumpy
5 / 5 (1) Mar 14, 2016
The conscious experience of "redness", on the other hand, is an irrefutable observable phenomenon
@Nou
yes and no.
we can use an MRI to see that something is *being experienced*, however, the experience itself (Qualia) is subjective

there are other arguments to that as well
https://en.wikipe...f_qualia
krundoloss
not rated yet Mar 14, 2016
I firmly believe that machines can be made to think. Whether or not they will think like we do, or be capable of "feelings", will depend on the programming and may be impossible without mixing in organic tissue. We should probably try to structure the AI to have a subconscious and a conscious mind, if our goal is to make it similar to us. No doubt it would be capable of being conscious of many things at once, unable to be "overwhelmed" like our fragile minds are.

An arrangement that has worked and will continue to work is to let the machines do all the calculations and research while the humans focus on creativity and insight. Much as the AI that defeated the chess champion in the '90s spawned a new "freestyle" chess competition, in which players can use AI and human together, and such teams have a better record than either human or AI alone. That is our future.
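[Editor's note: the "calculation" side of that division of labour is essentially game-tree search. A minimal minimax sketch over a tiny made-up tree; real engines add alpha-beta pruning and an evaluation function, and AlphaGo replaces exhaustive search with Monte Carlo tree search guided by neural networks. The leaf values below are arbitrary.]

```python
# Minimal minimax: the machine exhaustively "calculates" the game tree.
# A leaf is a numeric position score; an inner node is a list of
# child subtrees. Leaf values here are arbitrary, for illustration.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: a position score
        return node
    children = [minimax(child, not maximizing) for child in node]
    return max(children) if maximizing else min(children)

# A tiny two-ply tree: the maximizer picks a branch, then the
# minimizer replies with the worst option for the maximizer.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))  # → 3
```

The human half of a "freestyle" team contributes what this sketch lacks: judgement about which branches are worth exploring at all.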
Noumenon
not rated yet Mar 14, 2016
The conscious experience of "redness", on the other hand, is an irrefutable observable phenomenon
@Nou
yes and no.
we can use an MRI to see that something is *being experienced*, however, the experience itself (Qualia) is subjective


Yes, which is why I mentioned "through introspection" above. Why does that fact in any way dilute qualia's status as observable phenomena? Are you denying that your mind produced the impression of redness under the appropriate conditions?

Our cognitive faculties are constituted similarly enough that through inference one can substantiate the observable experience of "redness" in others. Science makes such inferences all the time.

"Phenomena", in its general sense, just means something that is observed to happen or to exist. That's all,... it does not necessarily pertain to a physical thing that can be placed on a table and dissected. It pertains to anything that is observable.

Noumenon
not rated yet Mar 14, 2016
Btw, the "subjectivity" of redness or pain or sound is not equivalent to the "subjectivity" of one's personal taste or opinion or feeling. These are two distinct uses of that word. Redness is produced by the mind upon quantifiable conditions, autonomously. It is not an opinion nor a feeling or taste that in other circumstances could be different in essence.

Whydening Gyre
5 / 5 (3) Mar 14, 2016
The conscious experience of "redness", on the other hand, is an irrefutable observable phenomenon
@Nou
yes and no.
we can use an MRI to see that something is *being experienced*, however, the experience itself (Qualia) is subjective

the "experience" is based on a lifetime of previous observations and subsequent rationalizations, which are mostly taught by others or learned from - experience.
So. Practice (and education) makes perfect...:-)
Nou,
until someone else has previously defined "Red" to you, it could - taste like salt...
TheGhostofOtto1923
5 / 5 (2) Mar 14, 2016
one of the stupidest comments in the history of Phys.Org
You're pretty cavalier with unsubstantiated judgements aren't you?

"One of [Dan Dennett's] more controversial claims is that qualia do not (and cannot) exist. Dennett's main argument is that the various properties attributed to qualia by philosophers—qualia are supposed to be incorrigible, ineffable, private, directly accessible and so on—are incompatible, so the notion of qualia is incoherent. The non-existence of qualia would mean that there is no hard problem of consciousness, and "philosophical zombies", which are supposed to act like a human in every way while somehow lacking qualia, cannot exist."

-Is that how you philos establish facts? By declaring something is true and then calling anyone who disagrees with you stupid?

Of course it is. But you've got many very clever and flowery words for stupid don't you? Why don't you try them and demonstrate how accomplished you are?
TheGhostofOtto1923
5 / 5 (2) Mar 14, 2016
If you and Otto are blinking twice at such basic logic,....
'Such basic logic' -another argument for the unquestionable existence of the soul. Or for that matter the inferior intellect of the negro.

You haven't presented anything which proves that your qualia or mind or consciousness aren't anything but illusions and wishful thinking.

And your tired argument that people who disagree with you are stupid or uneducated or obstinate doesn't flush.

BTW here is a list of people far more accomplished than you whom you also regard as stupid.
https://en.wikipe...f_qualia
Noumenon
5 / 5 (1) Mar 14, 2016
And your tired argument that people who disagree with you are stupid or uneducated or obstinate doesn't flush.


I was responding to this.....

"Among the totally stupid crap in these comments lately, Noumenon, you get the pointy hat." - bluehigh

One of [Dan Dennett's] more controversial claims is that qualia do not (and cannot) exist. Dennett's main argument is that the various properties attributed to qualia by philosophers—qualia are supposed to be incorrigible, ineffable, private, directly accessible and so on—are incompatible, so the notion of qualia is incoherent. The non-existence of qualia would mean that there is no hard problem of consciousness,


You keep referencing Dennett as if his proclamations were generally accepted. They're not, which is what "controversial claims" means. He doesn't solve anything by denying the existence of what is patently clear to everyone,.... pain, redness, sound, etc. He doesn't have an answer so he denies the problem.

bluehigh
5 / 5 (1) Mar 14, 2016
I feel your pain .. Oh wait, I don't.
Lol.
Noumenon
not rated yet Mar 14, 2016
I firmly believe that machines can be made to think


I see no reason why not; after all, we already have the example of our brain.

My only contention, as expressed in my second post here, is that it will be necessary first to determine how conscious thought occurs in us, and understanding how qualia emerge from the physical brain will be a prerequisite as well imo.

It's perplexing why anyone would object to this... to understanding the thing you wish to recreate.

Of course you can dilute what you mean by "thinking" machine as far as you want. I'm assuming strong-AI standards.

Noumenon
not rated yet Mar 15, 2016
Understanding how qualia emerge from the physical brain will be a prerequisite because it's not possible to know a priori how, or whether, the form of the substrate matters.

krundoloss
not rated yet Mar 15, 2016
....that it will be necessary first to determine how conscious thought occurs in us....... It's perplexing why anyone would object to this... to understanding the thing you wish to recreate.


It would be nice, but it is not necessary. We do not need to understand how conscious thought occurs within us, because we are creating something new. The philosophy of mind need not get involved; all we need is the result we are looking for.

Think about it, if you own a business, and you have an employee in another location. His work gets done. That location is profitable. Do you Need to know how he feels? What he is doing there? You might want to know, but you do not Need to. The function is being served.

AI only needs to produce results, then it is "working". It may never produce "feelings". That comes from Cells, little pieces of organic tissue that can be cold and hungry and vulnerable. Why would a machine have those characteristics?
Noumenon
not rated yet Mar 15, 2016
AI only needs to produce results, then it is "working". It may never produce "feelings". That comes from Cells, little pieces of organic tissue that can be cold and hungry and vulnerable. Why would a machine have those characteristics?


My comments are in reference to the Strong-AI hypothesis as pointed out above.

I agree with you that for practical purposes A.I. only needs to function according to its design,.... but such systems cannot be said to be "thinking" or to "understand" or to "learn", as is typically uttered in the industry. They are instead "computing" to derive "output" and "expanding their data sets".

krundoloss
5 / 5 (1) Mar 15, 2016
but such systems can not be said to be "thinking" or to "understand" or to "learn", as is typically uttered in the industry. They are instead "computing" to derive "output" and "expanding their data sets".


I agree. Yet aren't those terms fairly interchangeable? Thinking = Computing? Understand = Produce output? Learn = Expand Data Sets?

My point is, let's not make a machine that thinks like us; that would be like putting legs on a racecar.
TheGhostofOtto1923
5 / 5 (1) Mar 15, 2016
You keep referencing Dennett as if his proclamations were generally accepted. They're not
Dennett et al. I gave you a list. And most cognitive and neuroscientists ignore your philo words and defs entirely.
He doesn't solve anything by denying the existence of what is patently clear to everyone,.... pain, redness, sound, etc. He doesn't have an answer so he denies the problem
There it is again - 'everyone knows it's true so it just has to be'.

I bet there's a famous quote out there which tells us this phrase is a sure indication of something that's NOT true.
but such systems cannot be said to be "thinking" or to "understand" or to "learn"
Before you make such a statement you have to have defs for these words. You don't.
as is typically uttered in the industry
... ? Industry? The philo wordpump industry?
They are instead "computing" to derive "output" and "expanding their data sets"
Well this is exactly what brains do. And nothing more.
TheGhostofOtto1923
5 / 5 (1) Mar 15, 2016
Nou celebrates the fact that he has no idea how he thinks. 'I think therefore I am... special!'

No wonder the thought that his mental processes are simply too complex and flawed for him to ever grok in their entirety, is so distressing to him.

Perhaps distasteful is the better word.

'My mind' can't be the product of complexity and primal urges! I have a soul dammit! No wait - that was last-century. I have a metaphysical, uh... er no that's passé as well... I know - I have consciousness!! Yeah, let's see them talk me out of that!'

'My mum always said I was special and she was always right.'

-Naw she just wanted grandkids.
Noumenon
not rated yet Mar 15, 2016
You're not making any sense. You're all over the place with your accusations and even less clear with your conveniently feigned illiteracy.

What does it mean to say 'consciousness' and even 'minds' are illusions? What is this supposed to mean? That they are emergent phenomena from a physical brain? I have already confirmed this myself. What exactly are you objecting to and do you even know?

TheGhostofOtto1923
5 / 5 (1) Mar 16, 2016
What does it mean to say 'consciousness' and even 'minds' are illusions?
?? I gave you experts who go into this in depth and I showed you where to find them.

You'll note that they do not rely on declarations. They explain themselves with common words which most people are familiar with.

You might try these tactics if you really have a point to make.

And stop using the word emergent. Your usage has no meaning. You're just trying to imply a level of knowledge without actually producing an argument to support it.
I have already confirmed this myself
-You mean you confirmed this to yourself? You certainly haven't confirmed it to anyone else here.

Simply declaring that something exists, and deeming it self-evident, and proclaiming that everybody agrees with you, doesn't prove a thing. It's just annoying.

Look up the term dialectic.
Noumenon
not rated yet Mar 16, 2016
Again, you managed to say nothing.

What does it mean to say 'consciousness' and even 'minds' are illusions?


?? I gave you experts who go into this in depth and I showed you where to find them.


I'm not asking them, I'm asking you. I have no interest in debating the entire internet, especially out of context. I can also give you references to cognitive scientists who reject that view.

You're the one claiming that "mind" and "consciousness" are illusions, and more than this, that that point is in contrast to statements made by me.

And stop using the word emergent. Your usage has no meaning.


I use that term in the same way that it is used in science generally, in physics for example, and since I am demonstrably more knowledgeable in that area than you are,.... I am in the best position to determine its use, not you. What you should say instead is that it has no meaning to You, and then humbly ask for clarification.
TheGhostofOtto1923
5 / 5 (1) Mar 18, 2016
I'm not asking them, I'm asking you. I have no interest in debating the entire internet, especially out of context
-This is like gkam's comment that any ref that disagrees with him is wiki. I gave you context with refs. You like to make unsubstantiated and unsupported claims about complex subjects which you think can be expressed in a few posts.
I can also give you references to cognitive scientists who reject that view
Your sources tend to be dead and dying philos from decades and centuries ago, because they agree with you. My last refs for instance reflect current thinking.
TheGhostofOtto1923
5 / 5 (1) Mar 18, 2016
You're the one claiming that "mind" and "consciousness" are illusions, and more than this, that that point is in contrast to statements made by me
I'm claiming that the words are undefined, that they reflect nothing tangible, and that you can't prove they AREN'T illusions nor can you provide refs that can.

And I provided refs from experts who have concluded this, along with detailed explanations of their thinking, which I am not about to copy/paste here.

No science-minded person would make the statement that the existence of these things is a foregone conclusion while admitting at the same time that there is no evidence whatsoever to support this beyond the lint they found in their belly buttons.
