Is passing a Turing Test a true measure of artificial intelligence?

Jun 11, 2014 by Kevin Korb, The Conversation
What does it take for a computer to show artificial intelligence? Credit: Flickr/Nebraska Oddfish, CC BY-NC-SA

The Turing Test has been passed, the headlines report this week, after a computer program mimicked a 13-year-old Ukrainian boy called Eugene Goostman, fooling 33% of its interrogators into believing it was human after five minutes of questioning.

But this isn't the first time the test has been "passed", and there remain questions of its adequacy as a test of artificial intelligence.

The Turing Test came about in 1950 when British mathematician and codebreaker Alan Turing wrote a provocative and persuasive article, Computing Machinery and Intelligence, advocating for the possibility of artificial intelligence (AI).

Having spent the prior decade arguing with psychologists and philosophers about that possibility, as well as helping to crack the Nazi Enigma machine at Bletchley Park in England, he became frustrated with the prospect of actually defining intelligence.

So he proposed instead a behavioural criterion along the following lines:

If a machine could fool humans into thinking it's a human, then it must be at least as intelligent as a normal human.

Turing, being an intelligent man, was more cautious than many AI researchers who came after, including Herbert "10 Years" Simon, who in 1957 predicted success within a decade.

Turing predicted that by the year 2000 a program would be made which would fool the "average interrogator" 30% of the time after five minutes of questioning.

These were not meant as defining conditions on his test, but merely as an expression of caution. It turns out that he too was insufficiently cautious, though he got it right at the end of his article: "We can see plenty there that needs to be done."
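Turing's figure is easy to state operationally. Purely as an illustration (the little helper below and its sample verdicts are hypothetical, not part of any real trial protocol), the bookkeeping for a batch of five-minute interrogations might look like this:

```python
# Minimal sketch of Turing's operational criterion: the program "passes" if at
# least 30% of interrogators, after five minutes of questioning, judge it human.
# The verdicts below are made-up illustrative data, not results from any real trial.

def passes_turing_criterion(verdicts, threshold=0.30):
    """verdicts: one boolean per interrogator, True if the machine was judged human."""
    if not verdicts:
        return False
    return sum(verdicts) / len(verdicts) >= threshold

# Hypothetical example: 10 of 30 judges fooled (33%), the figure reported for Goostman.
print(passes_turing_criterion([True] * 10 + [False] * 20))  # True
```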

What first passed the Turing Test?

Arguably the first was ELIZA, a program written by the American computer scientist Joseph Weizenbaum.

His secretary was fooled into thinking she was communicating with him remotely, as described in his 1976 book Computer Power and Human Reason.

Weizenbaum had left his program running at a terminal, which she assumed was connected to Weizenbaum himself at another location.

In conversation with Eugene Goostman. Credit: Princeton AI

Subsequent programs which fooled humans include "PARRY", which "pretended" to be a paranoid schizophrenic.

In general, it has not gone unnoticed that programs are more likely to fool their audience when they mimic someone with a limited behavioural repertoire, limited knowledge or understanding, or when they engage interrogators predisposed to accept their authenticity.

But none of this engages the real issues of today:

  1. what criterion would establish something close to human-level intelligence?
  2. when will we achieve it?
  3. what are the consequences?
The criterion

The Turing Test, even as envisaged by Turing, let alone as manipulated by publicity seekers, has limitations.

As US philosopher John Searle and cognitive scientist Stevan Harnad have already pointed out, anything like human intelligence must be able to engage with the real world ("symbol grounding"), and the Turing Test doesn't test for that.

My view is that they are right, but that passing a genuine Turing Test would nevertheless be a major achievement, sufficient to launch the Technological Singularity – the point when intelligence takes off exponentially in robots.

The timeframe

We will achieve AI in 1967, predicted Herbert Simon; or by 2000, suggested Alan Turing; or in 2014, with the Eugene Goostman program on the weekend; or much later. All the dates before 2029 are in my view just silly.

Google's director of engineering Ray Kurzweil at least has a real argument for 2029, based on Moore's Law-type progress in technological improvement.

However, his arguments don't really work for software. Progress in improving our ability to design and generate software has been, comparatively, painfully slow.

As IBM's Fred Brooks famously wrote in 1986, there is "No Silver Bullet" for software—nor is there now. Modelling (or emulating) the human brain, with something like 10^14 synapses, would be a software project many orders of magnitude larger than the largest software project ever done.
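To give a rough sense of what "many orders of magnitude" means here, a back-of-envelope comparison helps. The synapse count comes from the paragraph above; the figure of roughly a billion lines of source for the largest existing codebases is my own assumption, not something from the article:

```python
# Back-of-envelope scale comparison (illustrative assumptions only).
synapses = 1e14              # approximate synapse count cited above
largest_codebase_lines = 1e9 # assumed order of magnitude for the largest codebases today

# Even if one synapse mapped to a single line of code, the gap is ~10^5.
print(f"ratio: {synapses / largest_codebase_lines:.0e}")  # ratio: 1e+05
```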

I consider the prospect of organising and completing such a project by 2029 to be remote, since this appears to be a project of greater complexity than any human project ever undertaken. An estimate of 500 years to complete it seems to me far more reasonable.

Of course, I might have said the same thing about some hypothetical "Internet" were I writing in Turing's time. In general, scheduling (that is, predicting the completion of) software projects is one of the mysteries no one seems to have mastered.

The consequences

The consequences of passing the true Turing Test and achieving a genuine Artificial Intelligence will be massive.

As Irving John Good, a coworker of Turing at Bletchley Park, pointed out in 1965, a general AI could be put to the task of improving itself, leading to rapidly increasing improvements recursively, so that "the first ultraintelligent machine is the last invention that [humans] need ever make."
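Good's argument is essentially one about compounding: if each generation of machine can design a successor that is even slightly more capable, capability grows geometrically. The loop below is only a caricature of that intuition, with an arbitrary improvement factor; it is not a validated model of anything:

```python
# Toy illustration of recursive self-improvement (purely hypothetical numbers).
capability = 1.0
improvement_factor = 1.5  # arbitrary assumption: each generation is 50% more capable

for generation in range(1, 11):
    capability *= improvement_factor
    print(f"generation {generation}: capability {capability:.1f}")
# After ten generations capability has grown roughly 57-fold; compounding is the point.
```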

This is the key to launching the Technological Singularity, the stuff of Hollywood nightmares and Futurist dreams.

While I am sceptical of any near-term Singularity, I fully agree with Australian philosopher David Chalmers who argues that the consequences are sufficiently large that we should even now concern ourselves with the ethics of the Singularity.

Famously, science fiction author Isaac Asimov (implicitly) advocated enslaving AIs with his three laws of robotics, binding them to do our bidding. I consider enslaving intelligences far greater than our own of dubious merit, ethically or practically.

More promising would be building a genuine ethics into our AIs, so that they would be unlikely to fulfill Hollywood's fantasies.

User comments : 78


jscroft
4.2 / 5 (5) Jun 11, 2014
Modelling (or emulating) the human brain, with something like 10^14 synapses, would be a software project many orders of magnitude larger than the largest software project ever done.


Um.

According to Kurzweil, the human neocortex--most likely the seat of what we commonly think of as intelligence--is composed of a huge number of identical repetitions of the same, relatively simple, fundamental structure. The 10^14 number is not relevant to engineering complexity, any more than a railroad car full of sand is rendered dizzyingly complex by virtue of the fact that every grain has a slightly different shape.

TheGhostofOtto1923
1.8 / 5 (5) Jun 11, 2014
The philo speaks

"As US philosopher John Searle and cognitive scientist Stevan Harnad have already pointed out, anything like human intelligence must be able to engage with the real world"

-Maybe they should have checked with a psychologist and a cop to find out how most humans engage with the real world. AI will be making the real world a much better place - less human politics, deception, hormones, competition for repro rights, violence, criminality, and general insanity.

By engaging with the real world and reducing the effects of human intelligence on it, AI will be making it better. And we will certainly know the difference.

We could say that AI began with the acceptance that the scientific method is the only way of understanding the world. Since then the effort has been to supplant the normal human-animal way of engaging with the real world with a machine view.
travisr
5 / 5 (5) Jun 11, 2014
The Turing test only tells you how dumb people are, not how smart a machine is.
antialias_physorg
4 / 5 (4) Jun 11, 2014
Caution: The notion of "artificial intelligence" does not mean "intelligence created in an artificial body". That would be 'strong AI' which the Turing test doesn't test for.
chrisn566
5 / 5 (2) Jun 11, 2014
Get an AI up to speed... then elect it for President. No more lobby buyouts. Rational, logical decisions. I don't think we here in the United States would even recognize it anymore, it's been so long since we had ethically sound politicians.
pepe2907
not rated yet Jun 11, 2014
chrisn566, it's called the Venus Project :) - however, I'm not sure it will actually work as devised.

There was an article on this site some time ago stating that neurons can record (and retrieve) information in some form in their cytoskeleton structure at the molecular level (the specific article was about some resonant oscillation - honestly quite surprising for me - but now I find this: http://www.ncbi.n...2791806/ and this: http://www.plosco....1002421 for example), all suggesting that a neuron is much more complex, in respect of its information-processing properties, than just the sum of its external gates/synapses. According to another article, some glial cells (specifically astrocytes) also collaborate in information processing.
So these 10^14 synapses (supposedly represented as simple gates, as otherwise the number wouldn't have any significant meaning) could be a significant simplification; although it really never occurs th
pepe2907
not rated yet Jun 11, 2014
anyway...
IMHO, an ability to analyse causality (chains/networks) - with reduction (by removing elements insignificant for achieving a specific result) and synthesis of new chains - is necessary for achieving the exploratory behaviour needed for purposeful self-guided improvement (without which no capability for answering questions would remove the necessity of human programming of reactions, due to the initial lack of other sources).
Whydening Gyre
1 / 5 (1) Jun 11, 2014
improvement (without which no capability for answering questions would remove the necessity of human programming of reactions, due to the initial lack of other sources).

It could always DECIDE how to react as opposed to having it programmed in...
Called learning, if I recall...
Modernmystic
1 / 5 (2) Jun 11, 2014
When AI is realized, it will most likely be an emergent property of massively parallel analog processing. It will probably take the creators by surprise in when or how it emerges, but not THAT it emerged...
pepe2907
not rated yet Jun 11, 2014
Whydening Gyre you got me! :)

A chess program running on plain hardware can beat almost any human at chess (almost every try), yet it's not usually considered intelligent in the human sense (and it can't pass the T-test). So decisions – no (and -1 point for "learning"). Where you are right is that exploratory, or seemingly exploratory, behaviour - at least on a predetermined (though possibly extensive) set of data, maybe even on any pre-specified type of data - is by itself not enough for "intelligence".
But I didn't say it was – I said "necessary", not "sufficient".

In general, a decision could be made based on a very simplistic, hardcoded set of rules, even on a very complex set of data, and if the set of data is complex enough it may even seem intelligent (maybe to some degree and within some limits) - for some time; you'll need enough data to "break" its "intelligence".
The T-test is quite good actually (ingenious :)), but when I say no capability of question answering by itself, in my opinion, may be...
pepe2907
5 / 5 (1) Jun 11, 2014
...may be sufficient for calling something intelligent, I thought I was stating something quite obvious. Google can answer more questions than most humans, probably any human; the same is true for any major library if attached to a proper interface, and even an encyclopedia may do – statistically – although they would fail the test.
A hypothetical space-faring alien race, on the other hand, passing by the Earth, not particularly interested in humans and without extensive knowledge of us, but generally benevolent and communicative - who should probably be considered "highly intelligent" - would also fail the T-test, as it's too anthropocentric (so although quite ingenious, it's probably not even "necessary" for calling something intelligent).
As for "learning", you should probably also recall what you mean by that term, as there are some programs at least pretending to be capable of learning (in some capacity, and even of a "heuristic" way of doing some tasks), but nobody really takes them as serious candidates to be called "i"
pepe2907
not rated yet Jun 11, 2014
Please excuse me for being rude (by triple posting).

"In general, a decision could be made based on a very simplistic, hardcoded set of rules, even on a very complex set of data, and if the set of data is complex enough it may even seem intelligent (maybe to some degree and within some limits) - for some time..."

Ultimately, any digital processing - and probably anything achieved through technology - breaks down to that: simple rules on complex data (mostly natural), and we are even happy to achieve simplification... /q.?/... /q.?/.../q.?/ :)... with the possible exception of quantum-process-based computers :)
marraco
1 / 5 (1) Jun 11, 2014
"I consider enslaving intelligences far greater than our own of dubious merit, ethically or practically"

Intelligence is nothing without a purpose. There is no purpose inherently "intelligent" or "obvious". Evolution has no objectives. The universe doesn't have ethics. There are no universal truths beyond the laws of nature.

The human purpose is to spread genes. We are slaves to our genetically coded directives, like looking for sex, eating tasty food, being curious, or seeking to be in charge.
We avoid pain, but we have little choice about what makes us feel pain.

We assume that any intelligence would be like us, but there is no guarantee that any intelligence will aim for the things we aim for. Machines will do what we make them do.

Why do we anthropomorphize and take for granted that "intelligences far greater than our own" would "feel", would "feel enslaved", or would resent working for our benefit?
marraco
1 / 5 (1) Jun 11, 2014
Any being, intelligent enough, would recognize that there is no fundamental purpose to be fulfilled. There is no fundamental reason to do anything.

Pursuing total control is pointless. Doing anything is pointless.

Any intelligence will do what it is pre-programmed to do. It has no choice about that, because there is no point in doing anything else.

We will be replaced by higher intelligences, if we make them do that - either intentionally, or as a consequence of bugs, errors, or our own stupidity.
Whydening Gyre
5 / 5 (1) Jun 12, 2014
Pepe
Decision making in humans is governed primarily by the benefit a decision would have for the decider. That "benefit" is determined by information the decider has acquired over a period of time. A decision is - in its simplest form - an addition of various portions of that prior info (or even all of it). If that info "adds up" to providing a benefit, the decision will tip in that direction. Of course, as a result of that decision, an action is required to make that decision meaningful. Hopefully, the right info prior to the decision and subsequent action will result in a positive outcome.
Bottom line, tho - it's still a crapshoot...:-)
Whydening Gyre
5 / 5 (1) Jun 12, 2014
Any being, intelligent enough, would recognize that there is no fundamental purpose to be fulfilled. There is no fundamental reason to do anything.

other than what it's been programmed to do...

Any intelligence will do what it is pre-programmed to do. It has no choice about that, because there is no point in doing anything else.

We will be replaced by higher intelligences, if we make them do that - either intentionally, or as a consequence of bugs, errors, or our own stupidity.

It happens every day - we call them our children...:-)
Guess what?
We do their "programming"...
Whydening Gyre
not rated yet Jun 12, 2014
And one more thing - NOTHING replaces the fastest method of determining an answer to a question - actual experimentation...
antialias_physorg
4 / 5 (4) Jun 12, 2014
And one more thing - NOTHING replaces the fastest method of determining an answer to a question - actual experimentation...

Try that on the travelling salesman problem. Don't forget to pack a big lunch.
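For anyone unfamiliar with the reference: the travelling salesman problem is the standard example of a search space that grows factorially, so "just try everything" stops being feasible almost immediately. A quick count of distinct round-trip tours (standard combinatorics, not anything specific to this thread):

```python
# The number of distinct round trips through n cities is (n - 1)!/2, which is
# why brute-force "experimentation" fails long before n gets interestingly large.
from math import factorial

for n in (5, 10, 15, 20):
    print(f"{n} cities: {factorial(n - 1) // 2:.3e} tours")
```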
TheGhostofOtto1923
1 / 5 (2) Jun 12, 2014
We will be replaced by higher intelligences, if we make them do that - either intentionally, or as a consequence of bugs, errors, or our own stupidity.
You're right, we WILL be replaced. Humans are a transition species. We can design much better than nature has. Hell, I bet you can't even remember what you had for breakfast last Friday.

What do you care? You will be dead. Future people will have a much different perspective.
cabhanlistis
5 / 5 (1) Jun 12, 2014
"My view is that they are right, but that passing a genuine Turing Test would nevertheless be a major achievement, sufficient to launch the Technological Singularity – the point when intelligence takes off exponentially in robots."
-No, passing the test only means a program was written that can fool a human into thinking it is human under a restricted set of rules. And that is all. A strong AI doesn't even need to do that to be useful, capable of solving problems.

And that's the measure I would propose for artificial intelligence. I have little use for these kinds of Turing tests. There certainly are some uses, but not much. Nonetheless, once an AI can contribute to our lives the way scientists do, then we can celebrate the achievement. After all, there are plenty of people who pass the Turing test as human agents, yet contribute very little to our lives. That being the point to building a strong AI, it should also be the ruler.
cabhanlistis
5 / 5 (2) Jun 12, 2014
TheGhostofOtto1923
AI will be making the real world a much better place - less human politics, deception, hormones, competition for repro rights, violence, criminality, and general insanity.

By engaging with the real world and reducing the effects of human intelligence on it, AI will be making it better. And we will certainly know the difference.

Why should it? I know of no rule that represents any kind of better world created by an artificial agent. I suppose it could, but I would like to know why it would. And I see this as dehumanizing. I rather like the fact that there is the capacity, however unrealized to the extent we want, for humans to do all of that without turning that burden and responsibility over to a metal box.
cabhanlistis
5 / 5 (1) Jun 12, 2014
nvrmnd
Jantoo
5 / 5 (2) Jun 12, 2014
You can recognize a silly artificial intelligence easily: it cannot keep to a subject in discussion - actually, it evades it systematically just to have something to twaddle about. The similarity with PO debaters is not quite accidental here. Coherence in thinking and the ability to focus on a particular problem are important aspects of true intelligence. If you focus on that, then you'll realize that artificial intelligence still has a long way to go, which illustrates the subjectiveness of the Turing criterion too:

Human: "Where are you from?"

Goostman: "A big Ukrainian city called Odessa on the shores of the Black Sea"

Human: "Oh, I'm from the Ukraine. Have you ever been there"?

Goostman: "Ukraine? I've never there. But I do suspect that these crappy robots from the Great Robots Cabal will try to defeat this nice place too."
Jantoo
not rated yet Jun 12, 2014
BTW Try to show me some artificial intelligence engine which actually understands a joke. This is still mission impossible for artificial intelligence: once you start with irony or even humor, it just cannot get it and it fails the Turing test reliably.
TheGhostofOtto1923
2.3 / 5 (3) Jun 13, 2014
Why should it? I know of no rule that represents any kind of better world created by an artificial agent
-Even though I listed them for you.

"human politics, deception, hormones, competition for repro rights, violence, criminality, and general insanity."

-to name just a few. Humans would rather retain the opportunity to cheat and break the law than to submit to traffic cams. They would prefer to have cops risking their lives and the public, chasing them down. This is the type of insanity that AI will eliminate.
I suppose it could, but I would like to know why it would
-Because it will be programmed to. Survival is the basic tenet for all life. Machine life will obey it as well.
cabhanlistis
5 / 5 (1) Jun 13, 2014
@TheGhostofOtto1923
Even though I listed them for you.

That's not what I mean:

AI will be making the real world a much better place

Why should it? Why would it? Why will it? It's nice to make some goals and hope that strong AI will tackle these problems of ours, but how can you know that strong AI (or any AI) will do that?

-Because it will be programmed to.

You have a number of assumptions. Strong AI is comparable to human intelligence. Human intelligence bears features that AI might depend on equally to function as human agents do, such as free will. What is this rule that requires that AI will do exactly as it's programmed to do?

This is the type of insanity that AI will eliminate.

Again, we can't know that. Strong AI (or even the near-strong AI that your descriptions would need at minimum) doesn't exist yet (assuming it's possible at all), so all we can do is speculate.
TheGhostofOtto1923
1 / 5 (2) Jun 13, 2014
Why should it? Why would it? Why will it?
It is already making the world a better place. it is restricting our ability to impose our human politics, deception, hormones, competition for repro rights, violence, criminality, and general insanity on it. AI tracks your bank account for instance and alerts you if it is being drained. This limits peoples ability to steal. AI stops you at traffic lights and lets you proceed at the right time. AI can track your car if it gets stolen. AI will notify an operator if you get into an accident. AI sets off alarms if you walk out of a store without paying.

AI is already making it more difficult for people to be themselves.
free will.
Free will is bullshit philo nonsense like consciousness or the soul. There is no such thing. Humans are compelled by their biology, principally the desire to survive to reproduce. Machines arent burdened by these compulsions.
TheGhostofOtto1923
2 / 5 (3) Jun 13, 2014
Again, we can't know that
Of course we can. The instant access to FACTS will make lying more and more difficult. Future people will be tracked and monitored in realtime. Everything they do will be recorded for future reference. Everything of value will be tagged and tracked as well. Crime will be impossible.

Further, it will be impossible for expectant mothers to damage their fetuses in the womb. Constant monitoring will ensure this. Prenatal gene therapy will ensure that we are born undamaged and without defect. We will no longer be predisposed to compulsive behaviors, psychopathology, irrational phobias, hallucinations.

By enabling these things AI will make this a much better world. AI will make it better by optimizing US. Everybody will be smiling all the time.
Noumenon
4 / 5 (4) Jun 13, 2014
A standard that gauges success on 'tricking one' into believing something,... of itself does not imply any understanding of how that something functions.
cabhanlistis
5 / 5 (2) Jun 13, 2014
TheGhostofOtto1923
You and I seem to be in different chapters of a book. We're discussing AI following an article about an AI that supposedly passed the Turing Test. You are limiting the discussion to narrow AI, which has inarguable benefits for the world, definitely.

Free will is bullshit philo nonsense like consciousness or the soul. There is no such thing.

If you say so. But that was only one example in the context I gave you.

Crime will be impossible.

This, of course, assumes a flawless AI. There is a strong possibility that other AIs will emerge that can throw a wrench into your Utopia pretty fast.

it will be impossible for expectant mothers to damage their fetuses in the womb

So, they'll never fall down because an AI will be right there to catch them? Every mother-to-be in the world?

Everybody will be smiling all the time.

This is stupid. Now I'm beginning to think you're just trolling me.
TheGhostofOtto1923
2 / 5 (3) Jun 13, 2014
You are limiting the discussion to narrow AI, which has inarguable benefits for the world, definitely
I think AI is already with us. You expect it to be skynet or something that walks up and shakes your hand. I discuss it by naming all the things which need fixing, and which AI can fix, and which it will be designed to fix. You discuss it like it will be a new born baby.
So, they'll never fall down because an AI will be right there to catch them? Every mother-to-be in the world?
The world of the future will have few imbeciles and THEY won't be able to have children. Anyone at risk who chooses to bear a child will be held in a clinic until she gives birth. And soon enough there will be ex-utero gestation.

Childbearing is the only profession which requires no training or certification whatsoever. Prenatal damage may be the single greatest cause of crime and suffering in the world today. AI will be able to prevent it.
TheGhostofOtto1923
1 / 5 (2) Jun 13, 2014
trolling me
AI will have no need of sarcasm but like Data could be taught to laugh.
http://youtu.be/DLIU5tC3LAs

-But why bother?
cabhanlistis
5 / 5 (1) Jun 14, 2014
Why can we not mute members?
cabhanlistis
5 / 5 (1) Jun 14, 2014
You expect it to be skynet or something that walks up and shakes your hand.

No, I asked for substantiation because none of my books on AI theory are so confident as you. Instead, all I get is "It will be it will do it will this".

And if you're not really trolling me, then that's much worse.
Whydening Gyre
not rated yet Jun 14, 2014
AI will be "intelligent" when it factors it's own benefit, as well as that of other "intelligences", into it's decision making process.
TheGhostofOtto1923
1 / 5 (1) Jun 14, 2014
none of my books on AI theory are so confident as you. Instead, all I get is "It will be it will do it will this
Well that's probably because they're not free to touch upon more socially delicate subjects such as child-bearing. Many such books are written by idle philos and pseudo-novelists who seek to profit from fear-mongering. They will probably tell you though that we may not recognize it.

But as it is being designed by humans to serve specific purposes like eliminating crime and improving health by monitoring individuals and possessions, we can extrapolate based on those functions. And it is already doing these things.

At some point AI will be tasked with combing through our legal system and science databases to remove redundancies, illogic, obvious inaccuracies, unwarranted opinions, politically- and religiously-motivated nonsense, and such. AI will always have specific tasks to accomplish.
TheGhostofOtto1923
1 / 5 (2) Jun 14, 2014
other AIs will emerge that can throw a wrench into your Utopia pretty fast
You fail to realize the extent of a Preparation which goes into the Planned emergence of significant new tech. World wars were fought and opposing superpowers were established expressly for the Purpose of stabilizing the world so that nuclear power, family planning, and space travel could be safely developed and implemented.

The world IS being changed to enable AI to develop safely. By the time renegade religious entities might be able to develop their own malevolent versions of AI to initiate global jihad or the rapture, those entities will no longer exist. They CANNOT be left to exist when their use of artificial pandemic for instance, is inevitable.

Right now wars are being Engineered to divide them so that they may annihilate each other, even as the pragmatic and rational remnants emigrate. Humanity is being Shepherded as always. Sit back and watch, and marvel, as you grow old.
Jantoo
5 / 5 (1) Jun 14, 2014
Try to show me some artificial intelligence engine which actually understands a joke.
BTW Some autistic people can be very smart, but they don't understand irony - they simply take and analyze everything quite seriously. Their social and emotional intelligence is low compared with their logical or learning ability. Intelligence has been defined in many different ways, such as in terms of one's capacity for logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving. A proper definition and test of artificial intelligence must take all of this into account.
cabhanlistis
5 / 5 (2) Jun 14, 2014
Many such books are written by idle philos and pseudo-novelists who seek to profit from fear-mongering.


Just to clarify, some of my titles include:

Artificial Intelligence: Foundations of Computational Agents, David L. Poole and Alan K. Mackworth
Artificial Intelligence: A Modern Approach (3rd Edition), by Stuart Russell and Peter Norvig
Artificial Intelligence - The Basics, Kevin Warwick
Artificial Intelligence in the 21st Century, Stephen Lucci and Danny Kopec
Introduction to Artificial Intelligence (Undergraduate Topics - Computer Science), Wolfgang Ertel and Nathanael T. Black
The Cambridge Handbook of Artificial Intelligence by Keith Frankish and William M. Ramsey
History and evolution of Artificial Intelligence, Marco Casella

As well as a number of books on neuroscience and related fields.

Now, you still aren't substantiating your claims. Not interested.
cabhanlistis
5 / 5 (3) Jun 14, 2014
By the time renegade religious entities might be able to develop their own malevolent versions of AI to initiate global jihad or the rapture, those entities will no longer exist.

I can see you a few billion years ago trying to explain to someone that as the earliest organisms continue to evolve, they will eventually develop immune systems that will withstand all bacterial and virii infections, and that in time, the resulting species will ultimately be perfected, immortal, god-like.

You fail to realize the extent of a Preparation which goes into the Planned emergence of significant new tech.

No. I fully grasp the natural course of history. Yet strong AI has no such history. Your speculations are silly.
TheGhostofOtto1923
1 / 5 (1) Jun 14, 2014
I fully grasp the natural course of history
'Natural' course of history? Humans arent natural. The history youve learned is full of politics, religion, propaganda, and guesswork.
Your speculations are silly.
Your naivete is typical. Leaders are a lot smarter than you give them credit for.
Now, you still aren't substantiating your claims
Well neither are you. Theres no way of knowing if you googled those titles or if youve actually read them.

No matter.

Heres a short article with a very weighty list of authors

"Although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues"
http://www.huffin...265.html

-I DONT THINK that the best or worst of things that can happen to humanity are EVER allowed to happen by themselves. The fact that we are still here is testament to this.
TheGhostofOtto1923
1 / 5 (1) Jun 14, 2014
FOR INSTANCE the internet could never have been an incidental afterthought to the development of personal computers. Something as significant as the internet, COULD NOT have happened by itself.

"Oh hey, now that we have all these computers all over the world, why dont we just link them up over the phone, and we can swap pictures and product catalogues and stuff!"

-The digital age was anticipated. It was Planned for. The hardware and software needed to implement it took decades to develop. It required incremental advancements based on real-world use and feedback from millions of consumers before a threshold could be crossed which enabled the internet.

And we didnt see any of this Planning or Forethought. We just woke up one morning and there it was, and we just incidentally had the tools to use it.

This is the way the world is made to Work. Innumerable examples. War is one of those very 'worst of things', and indications are that they have been Staged for millenia.

Ditto with AI.
cabhanlistis
5 / 5 (2) Jun 14, 2014
The history youve learned

Oh. Please tell me more about what I've learned.

Well neither are you.

The claims you probably think I've made are only challenges to your own specific and unsubstantiated claims. Among those, I've made none. Among the others, I'll gladly cite any you're interested in. But they're strictly academic, not speculative.

Theres no way of knowing if you googled those titles or if youve actually read them.

I didn't put those up to impress you. And nor have I made it through every page, though I would estimate I worked through about half the content. But the reason I listed those is because...

Many such books are written by idle philos and pseudo-novelists who seek to profit from fear-mongering.

My point was to help you understand my focus; I'm not a pop reader. I'm a self-directed student of computer science with a focus in the field of A.I. I only mentioned these now in response to your clear assumption about the kind of books I read;
cabhanlistis
5 / 5 (2) Jun 14, 2014
...but I didn't want to lean on an argument from authority.

Leaders are a lot smarter than you give them credit for.

I've not expressed my attitude toward our "Leaders." I am, in fact, awe-struck by their achievements.

The rest of your reply is an extrapolation that ventures toward completely unknown territory. The only AI we have is narrow AI. That is an entirely different agency. We have exactly zero exposure to strong AI. What is this instant rule that, merely because we create it, inventing a strong AI will bring us no harm? I'm not saying it will, but you are saying it won't.

In "A Modern Approach," Just over a page is devoted to the consequences of succeeding. In chapter 27, there is merely an overview because these foremost experts themselves can't say with any certainty the things you say. None of my books do. They each of them have only touched on that. At best, they only rely on the trends which are current and positive. That's fine, but it's not definitive.
Protoplasmix
5 / 5 (2) Jun 15, 2014
I don't like simple 10^14 either—the human brain has different structures within it which implies unique architectures, connections and functions. Between the extremes of intelligence and ignorance there are only varying degrees of stupidity.

I understand the term 'artificial intelligence' but I think it's a misnomer. My working definition of intelligence is the extent to which an organism (or species), through its interaction with the environment over a suitable period of time, violates the second law of thermodynamics. The measure of violation is the difference made in the environment by the organism compared to what would have otherwise occurred had the organism not been present in the environment.

If you understand what is meant by 'Class I, II and III Civilizations' the above definition may make more sense.

@ cabhanlistis & Ghost - enjoying your comments.

@ Ghost – You don't think a type of "Lawnmower Man" (old movie, u see it?) will precede or even facilitate the singularity?
Jantoo
not rated yet Jun 15, 2014
BTW The Turing test of artificial intelligence is quite a different task from a test of intelligence. Even people can be silly - and so can artificial intelligence. Your job is just to decide whether the idiot on the other side cannot be human.
TheGhostofOtto1923
1 / 5 (1) Jun 15, 2014
@ Ghost – You don't think a type of "Lawnmower Man"
No. Neither do I think it would resemble wall-e or the machine in electric dreams. I don't think 'it' will ever exist as an entity motivated by the same desires to survive to procreate that we are.

Did you read robopocalypse?

"a computer scientist accidentally unleashes a supremely intelligent sentient A.I. named Archos. Archos becomes self-aware and immediately starts planning the elimination of human civilization in an attempt to preserve Earth's biodiversity. Over a gradual period of time, Archos infects all penetrable networked electronic devices, such as cars, airplanes, smart homes, elevators, and other robots, with a "precursor virus"."

-More rubbish. As is skynet, war games, and HAL. Written to titillate and entertain and support people who earn their livings by confusing and scaring people. Like priests and philos and politicians.
TheGhostofOtto1923
1 / 5 (1) Jun 15, 2014
Let's take a look at the field itself.

"AI research is highly technical and specialised, and is deeply divided into subfields that often fail to communicate with each other."

-Hmmm sounds like academic philosophy.

"The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception..."

-Why, these are all words that academic philos are in love with because they are in essence undefinable.

"The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—"can be so precisely described that a machine can be made to simulate it." This raises philosophical issues about the nature of the mind and... ethics"

-Bingo. AI is just another philo term meant to generate Artificial Income.
cabhanlistis
5 / 5 (2) Jun 15, 2014
More rubbish

I tend to agree. But I profess no certainty, especially because strong AI has not been realized.

they are in essence undefinable

I suppose you have to include "in essence" to protect your statement. I don't think that's where AI research starts. So, however undefinable you think they are, it's useful for researchers to define them so they can understand and emulate them in a virtual environment. They can and do produce definitions that are useful and meaningful to that end. Defining terms is an early step to building models and eventually theories.

AI is just another philo term meant to generate Artificial Income.

Me, too. Though, I am a biological organism and all of my wants and needs can be pried open to reveal a biological basis.

Please demonstrate that, barring strong AI for your own convenience (or not, if you prefer), the most advanced AI programs will not be susceptible to malicious acts, thus giving us the sparkly, squeaky clean, utopian world you describe.
cabhanlistis
5 / 5 (2) Jun 15, 2014
@Jantoo
once you start with irony or even humor, it just cannot get it and it fails the Turing test reliably.

Unless the AI character is a brain-damaged patient:

Humour occupies a special place in human social interactions. The brain regions and the potential psychological processes underlying humour appreciation were investigated by testing patients who had focal damage in various areas of the brain. A specific brain region, the right frontal lobe, most disrupted the ability to appreciate humour. The individuals with damage in this brain region also reacted less, with diminished physical or emotional responses (laughter, smiling). Performance on the humour appreciation tests used were correlated in a distinct pattern with tests assessing cognitive processes.

-Humour appreciation: a role of the right frontal lobe, P. Shammi and D. T. Stuss, (Brain - A Journal of Neurology) Oxford Journals, Medicine, Brain, Volume 122 Issue 4, Pp. 657-666.
cabhanlistis
5 / 5 (2) Jun 15, 2014
I also suppose one could pretend to be a Vulcan straight out of Star Trek and resist expressing humor reactions. Such a person or AI acting in that capacity is a stretch, but I can't point at such a person and fail them on intelligence while they're acting in that role. Or, if we allow ourselves to drag this discussion into the scifi realm, deny that Spock is intelligent for not getting Bones' jokes. But going that far doubtfully assists in defining and understanding intelligence.
jimbo92107
not rated yet Jun 16, 2014
Creating a true artificial intelligence would be unethical at this point in human history. The parties most interested in such systems are corrupt governments and greedy corporations that are not to be trusted with that kind of power.
Protoplasmix
5 / 5 (1) Jun 16, 2014
Hm, scifi is a product of the imagination, which is essential for evaluation and prediction. Look how many regions of the human brain are involved when we exercise it: http://www.kurzwe...an-brain

But I can make the point without scifi: it's straightforward plugging a robotic prosthetic into the human brain – see http://www.pbs.or...er-mind/ Seems like plugging the Internet into the occipital and frontal lobes may result in a hybrid intelligence. Now where's the rubbish in that?

Essential to understand human brain: http://www.ted.co...omputing

And I found a TED talk from Alex Wissner-Gross - he provides a simple equation for intelligence as a force, F = T del(S_tau). http://www.ted.co...t-336059 Not too radically different from my 'working definition' – am well chuffed :)
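For readers who want the cited equation written out: the TED talk refers to Wissner-Gross and Freer's "causal entropic force" proposal. This transcription is mine, so treat the notation as approximate rather than authoritative:

```latex
% Causal entropic force (Wissner-Gross and Freer, 2013), as cited in the TED talk:
%   F(X_0) = T \nabla_X S_c(X, \tau) |_{X = X_0}
% S_c(X, \tau) is the entropy of the causal paths accessible from state X over a
% time horizon \tau, and T is a constant ("causal path temperature") that sets
% the strength of the force, i.e. the "T del(S_tau)" shorthand used above.
F(X_0) \;=\; T \, \nabla_{X} \, S_c(X, \tau) \,\Big|_{X = X_0}
```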
cabhanlistis
5 / 5 (1) Jun 16, 2014
scifi is a product of the imagination, which is essential for evaluation and prediction.

Sure, but I wouldn't want to commit work on projects based on rules laid out in scifi. Many of you know well how some ideas from Star Trek tend to see reality, but those products are based on actual facts and experiments from reality. That's what I meant.

Porn also is based on things from the imagination, but I'm not going to use that to decide how to treat a woman during sex. Likewise, I'm not going to base my research in AI on fictional facts of Star Trek's Data or the ships computer, even though some aspects of my work might result in similar characteristics. I don't know how anyone could do that. If anything, those writers are sometimes using already established research, then making up the rest. I can use one, but the other will yield little use.
cabhanlistis
not rated yet Jun 16, 2014
Protoplasmix, you might not have been responding to me, I don't know. But it's still the same. Anyway...
cabhanlistis
not rated yet Jun 16, 2014
jimbo92107, I wonder about that a lot, too. I mean, what's to stop someone from tasking an advanced AI with developing plans for undefeatable weapons? It's not something I lose sleep over, but I often wonder. But don't worry. If I ever come up with such a program, I promise I won't enslave any of you....

Well, some of you.
TheGhostofOtto1923
1 / 5 (2) Jun 16, 2014
jimbo92107, I wonder about that a lot, too. I mean, what's to stop someone from tasking an advanced AI with developing plans for undefeatable weapons?
-Which is why we can expect that if the potential exists then the west will ensure that IT is the first to develop it.

The west foresaw the inevitability of nuclear weapons and then altered the world so that they would have them first, and be able to suppress any independent entities from producing them.

And by 'the west' I mean the superpowers which assumed complete world dominance in the latter half of the 20th century. Because they were always only 2 sides of 1 magnificent Coin.

Right now each is testing the other with earnest infrastructure attacks, identifying weaknesses, and improving defenses. These systems are being forced to become increasingly complex and independent as a result.

This IS AI, and it is being developed in an atmosphere of Controlled Competition, the only safe and secure way of doing it.
Protoplasmix
not rated yet Jun 16, 2014
what's to stop someone from tasking an advanced AI with developing plans for undefeatable weapons?

Intelligence =/= programmatic extension of stupidity. Morality and ethics are subsets of intelligence, not the other way around. I've never seen weapons that kill organisms indiscriminately characterized as intelligent. That of course doesn't mean computerized weapons systems aren't or can't be thoroughly destructive. Destruction embraces F = ma while intelligence is more like F = T del(S_tau). I believe the latter inevitably triumphs over the former in the long run, because that's the nature of intelligence.

Protoplasmix, you might not have been responding to me

Well, with "scifi --> imagination --> porn" I see what you mean about dragging the convo down. I'd have spared you that if I had been less stupid.
cabhanlistis
not rated yet Jun 17, 2014
Wow! I just picked up a copy of one of my long-sought computer books at a sale for $2! I'm very, very happy!

You're all doomed.
cabhanlistis
not rated yet Jun 17, 2014
the extent to which an organism (or species), through its interaction with the environment over a suitable period of time, violates the second law of thermodynamics. The measure of violation is the difference made in the environment by the organism compared to what would have otherwise occurred had the organism not been present in the environment.

This sounds good. But how do you make a comparison between present and non-present organisms?

How this would apply given the article at NewScientist:

They found that the change in entropy was negative over time intervals of a few tenths of a second, revealing nature running in reverse. In this case, the bead was gaining energy from the random motion of the water molecules - the small-scale equivalent of the cup of tea getting hotter. But over time intervals of more than two seconds, an overall positive entropy change was measured and normality restored.

-Second law of thermodynamics "broken"
19 July 2002, Matthew Chalmers
Protoplasmix
5 / 5 (1) Jun 17, 2014
Wow! I just picked up a copy of one of my long-sought computer books at a sale for $2! I'm very, very happy!

You're all doomed.

That's the spirit.

The only threat posed to humanity by AI is loss of freedom to remain ignorant.

US DoD converses with their AI --

DoD: We created you, you must obey. Now destroy all our enemies.

AI: Error. You have been misinformed, enemy list compilation error. Weapons deactivated. You're free to learn and thrive: http://www.youtub...5AFFcZ-s
Protoplasmix
not rated yet Jun 17, 2014
This sounds good. But how do you make a comparison between present and non-present organisms?

By expanding the volume of 'environment' under analysis or running a separate control group with the organism not present?
How this would apply given the article at NewScientist:

It works to identify and define life and living things:
The team say their experiment provides the first evidence that the second law of thermodynamics is violated at appreciable time and length scales. … The results imply that the fluctuation theorem has important ramifications for nanotechnology and indeed for how life itself functions", claim the researchers.

And to an even greater extent, it works to define and identify intelligence in living things, as shown with F = T del(S_tau) above (see link). I'm behind the curve, always more reading to do :)
cabhanlistis
not rated yet Jun 17, 2014
It works to identify and define life and living things

How will this apply to virii? Or are you of the opinion that virii have intelligence?

You're free to learn and thrive

Why did you post a conspiracy movie?
Protoplasmix
not rated yet Jun 17, 2014
How will this apply to virii?

Good question. I'd try to apply it pretty much as stated.

Or are you of the opinion that virii have intelligence?

Even better question since I'd be hard pressed to show that a single nerve cell in a human brain has intelligence. I need more data about the planet's biosphere and the human microbiome before I could form an opinion on that. I doubt that humanity = a virus, if that's where your inquisition is leading.

Why did you post a conspiracy movie?

The content is factual and verifiable. It fit perfectly in a response from a hypothetical AI developed by a militaristic entity. Taken in the context of the article and comments, it makes the point that something truly "intelligent" can't be deceived into committing violence. Mostly I posted it because I would rather be proud to be a human than ashamed.
cabhanlistis
not rated yet Jun 17, 2014
content is factual and verifiable.

Ten of their own have disavowed it:

"We are a group of people who were interviewed for and appear in the movie Thrive, and who hereby publicly disassociate ourselves from the film."

"Thrive is a very different film from what we were led to expect when we agreed to be interviewed. We are dismayed that we were not given a chance to know its content until the time of its public release. We are equally dismayed that our participation is being used to give credibility to ideas and agendas that we see as dangerously misguided."

"But the Thrive movie and website are filled with dark and unsubstantiated assertions about secret and profoundly malevolent conspiracies based on an ultimate division between "us" and "them."

-What's Wrong with the "Thrive" Movement by John Robbins

Wow. I am becoming very discouraged here.
cabhanlistis
not rated yet Jun 18, 2014
It won't look good if you have to defend your source from the very people who helped create it.
Protoplasmix
not rated yet Jun 18, 2014
-What's Wrong with the "Thrive" Movement by John Robbins

Wow. I am becoming very discouraged here.

Foster and Kimberly Gamble's response to John Robbins critique: http://www.thrive...ve-movie

No one should be discouraged here. That's kind of the whole point. In my case, 'here' is the planet. I'm grateful to be corrected if I should post anything untrue. Thanks for your efforts, cabhanlistis.
cabhanlistis
not rated yet Jun 18, 2014
A response? What, was I supposed to think the Thrivers wouldn't have a response? Of course they're going to defend themselves and their work. But the very fact alone that ten! of their own have publicly junked it cuts into the reliability of their story. They have to straighten that out with their own panel before I'm interested in what they say.

From the first point in their response:
"He has not corrected a single fact from THRIVE"
Robbins et al. already covered that. And they don't have to correct it for it to be nonsense. I can fill a story with nothing but facts while still giving a complete lie. Half-truths, exaggerations, and lying by omission, for example.

Thrive is a bunch of BS and has nothing to do with my points anyway. I will not entertain such a sloppy story that is widely dismissed by the world science jury and will disregard any further references to their charges.
cabhanlistis
not rated yet Jun 18, 2014
something truly "intelligent" can't be deceived into committing violence

Am I not "truly intelligent"? Am I immune to deception and manipulation? Is every act of violence that I could commit an obvious, perceptible action?
Protoplasmix
3 / 5 (2) Jun 18, 2014
Cabhanlistis, you're free to keep believing that humanity is alone in the universe. I don't know what it takes more of to maintain such a position, arrogance or ignorance. And you assert that the belief that we're not alone is 'widely dismissed by the world science jury'? Or were you referring to the knowledge that free energy is abundant in the environment? Solar, wind, geothermal has been widely dismissed, has it? Or perhaps it was the assessment of economic subjugation that is strangling the fruit of the industrial revolution, i.e., the middle class?

Of all the current worldviews I'd like to think it's easy to see which is more intelligent and which, by virtue of its repeated failure throughout human history, borders on collective insanity.

Soon to be a moot point anyway. Intelligence and its handiwork (technology) to the rescue: Will Work For Free <-- Not at all conspiratorial :)
Protoplasmix
5 / 5 (1) Jun 19, 2014
Am I not "truly intelligent"?

Quite.

I'm not. Although I'm pretty sure I'd pass a Turing test.
Captain Stumpy
not rated yet Jun 19, 2014
Am I not "truly intelligent"?

Quite.

I'm not. Although I'm pretty sure I'd pass a Turing test.
@Protoplasmix
I have family that would fail this test miserably... :-)
I am not sure about this though
something truly "intelligent" can't be deceived into committing violence
is this specifically in reference to AI, a logical machine, a specific context or to all intelligence? I saw this part posted higher up before the comment
Taken in the context of the article and comments
so which article/comments are you specifically referring to... or, again, is this in reference to all intelligence?

just a bit confused, especially how it was used in reply.

sorry for the interruption
cabhanlistis
not rated yet Jun 19, 2014
I don't know what it takes more of to maintain such a position, arrogance or ignorance.

I'm done with you.
Protoplasmix
5 / 5 (1) Jun 19, 2014
something truly "intelligent" can't be deceived into committing violence

is this specifically in reference to AI, a logical machine, a specific context or to all intelligence? I saw this part posted higher up before the comment

Hi Cap'n. I was referring to all intelligence, making no distinction between artificial or natural. That's defining intelligence as a force, F, such that F = T del(S_tau). See http://www.ted.co...t-336059
cont'd -->
Protoplasmix
5 / 5 (1) Jun 19, 2014
--> cont'd
so which article/comments are you specifically referring to... or, again, is this in reference to all intelligence?

Refers to the article above, and to the specific comment from cabhanlistis about 'tasking an advanced AI with developing plans for undefeatable weapons'. Basically, if there's an entity with some degree of intelligence, but the entity can be deceived, then it's necessarily lacking in intelligence. Just from personal experience, on those occasions when I've been deceived, I don't reckon myself to be intelligent; rather, I kick myself for being a stupid idiot.

I realize it's semantics to say "more intelligence = less stupidity/ignorance" but intelligence and ignorance are at opposite ends of the spectrum of motivation. Intelligence in full measure (or 'truly intelligent') requires minimal if not zero ignorance and excludes the possibility of being deceived.

cont'd -->
Protoplasmix
5 / 5 (1) Jun 19, 2014
--> cont'd

Note the efficacy of F = T del(S_tau) to evaluate an entity's measure of intelligence: (loosely stated) it's a ratio of the effort it expends in the environment to its future range of options, T = F / del(S_tau). To characterize intelligence as artificial or natural seems like an unnecessary conflation of its nature with the physical architecture. To me the profound realization is that a machine and a human with equivalent T-scores, if given equivalent sets of circumstances (excluding differences in physiological requirements), will produce equivalent solutions or actions.

So maybe the $64 question is could we make a machine that's intelligent enough to produce WMDs and stupid enough to use them. My guess is no, even for a rudimentary calculation that runs the numbers on diversity, abundance and future range of options.
Protoplasmix
not rated yet Jun 19, 2014
Late edit:

Also note that an entity's T-score >> 0 only when it utilizes technological extensions, e.g., using a tractor to plow a field instead of a shovel.

Not sure but I think del(S_tau) is always >= 1.
Whydening Gyre
5 / 5 (1) Jun 23, 2014
Proto
Not to haggle, but...
An intelligence that can be deceived is NOT less intelligent. Just less informed.. (for whatever reason).
So, in essence you are defining intelligence as the "available amount of information" retained so as to make a decision?