Hawking warns AI 'could spell end of human race'

December 3, 2014
Theoretical physicist professor Stephen Hawking speaks at a press conference in London on December 2, 2014

British theoretical physicist Stephen Hawking has warned that development of artificial intelligence could mean the end of humanity.

In an interview with the BBC, the scientist said such technology could rapidly evolve and overtake mankind, a scenario like that envisaged in the "Terminator" movies.

"The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race," the professor said in an interview aired Tuesday.

"Once humans develop it would take off on its own, and re-design itself at an ever increasing rate.

"Humans, who are limited by slow biological evolution, couldn't compete and would be superseded," said Hawking, who is regarded as one of the world's most brilliant living scientists.

Hawking, who is wheelchair-bound as a result of motor neurone disease and speaks with the aid of a voice synthesiser, is nevertheless keen to take advantage of modern communications technology, and said he was one of the first people to be connected in the early days of the Internet.

He said the Internet had brought dangers as well as benefits, citing a warning from the new head of Britain's electronic spying agency GCHQ that it had become a command centre for criminals and terrorists.

"More must be done by the Internet companies to counter the threat, but the difficulty is to do this without sacrificing freedom and privacy," Hawking, 72, said.

Hawking on Tuesday demonstrated a new software system developed by Intel that allows him to write faster. It will be made available online in January to help those with motor neurone disease.

While welcoming the improvements, the scientist said he had decided not to change his robotic-sounding voice, which originally came from a speech synthesiser designed for a telephone directory service.

"That voice was very clear although slightly robotic. It has become my trademark and I wouldn't change it for a more natural voice with a British accent," he told the BBC.

"I'm told that children who need a computer want one like mine."

Related Stories

Hawking's speech software goes open source for disabled

December 2, 2014

The system that helps Stephen Hawking communicate with the outside world will be made available online from January in a move that could help millions of motor neurone disease sufferers, scientists said Tuesday.

192 comments

antialias_physorg
4.7 / 5 (6) Dec 03, 2014
"Humans, who are limited by slow biological evolution, couldn't compete and would be superseded,"

While I agree that humans could not compete and would be superseded, I don't see what we would compete for - as the necessities of AI are totally different from those of humans (i.e. both could coexist without much of a problem)

Humans want/need:
- food
- an environment to live in (certain temperatures, gravity, pressure, atmospheric conditions, radiation levels, etc. )
- energy
- the possibility to reproduce

With the exception of energy, none of this applies to AI (and there's plenty of energy around...especially since such an AI is certainly not limited to surface living - or to living on planets at all)
Achille
1.4 / 5 (11) Dec 03, 2014
""Humans, who are limited by slow biological evolution, couldn't compete and would be superseded," said Hawking, who is regarded as one of the world's most brilliant living scientists."

First of all, Hawking is not an expert in the field of AI and just doesn't know what he is talking about.

Second, machines DO NOT evolve at all, while humans and living organisms do. This comparison and statement from him is just stupid. When was the last time you saw a machine evolving by itself? Reproducing itself? It doesn't happen at all. Humans are making better machines, not the machines.

This guy should stick to his field. Giving him so much press is irresponsible sensationalism.
alfie_null
5 / 5 (3) Dec 03, 2014
Discounting all the presumptions that must be made for his doomsday vision to come true, is there any point in worrying about it? Practically speaking, how will A.I. researchers be able to determine at what point they should stop advancing the research? Only when that point has been passed.

The same arguments could be made regarding any sort of genetic research, for instance. In some possible future, we could engineer some sort of critter that will then out-compete us or destroy our environment and thus extinguish humanity. Yet the research has the prospect for doing lots of good stuff, like eliminating genetic diseases.
antialias_physorg
5 / 5 (8) Dec 03, 2014
machines DO NOT evolve at all,

If a machine can be as smart as a human then it can rewrite its own code.
Even the 'dumb' machines of today can evolve to acquire new abilities via a number of machine learning methods.

how will A.I. researchers be able to determine at what point they should stop advancing the research?

By conducting research in a sandbox. If the AI thinks it's in reality and tries to go medieval on the sandbox environment then that's probably a good place to stop.
(However a super smart AI might realize it's in a sandbox and fake being benign until released)

Or it might just do this:
http://xkcd.com/1450/
viko_mx
1 / 5 (15) Dec 03, 2014
Artificial intelligence will never surpass human intellectual capacity, and the imaginary pictures painted by science fiction writers, and by Mr. Hawking in this case, will not come to pass. There are other, much more real dangers that unfortunately will. The human brain is the most effective computer in terms of mental potential / energy consumption, and it works massively in parallel with analog transmission of information. The effective functionality of a few tens of neurons is equivalent to a modern computer chip. Human intellect allows a degree of unpredictability in its decisions and is characterised by imagination, unconventional solutions and a diverse approach to a given problem. Artificial intelligence can only imitate this activity and has no creative possibilities.
It is in principle impossible for one intelligence to create a superior intelligence by itself. It does not have enough intelligence to understand how it itself works. This is a fundamental limitation.
Sigh
5 / 5 (5) Dec 03, 2014
Humans are making better machines, not the machines.

He's discussing a scenario in which the machines are at least as intelligent as humans. If we get to understand intelligence well enough to get to that point, it is likely that we can identify at least some of what limits our intelligence, and ease those limitations for the machines, making them smarter than us.

What factor have you identified that would prevent the machines from carrying on that process, exactly as Hawking described?
Sigh
5 / 5 (4) Dec 03, 2014
It is in principle impossible for one intelligence to create a superior intelligence by itself. It does not have enough intelligence to understand how it itself works. This is a fundamental limitation.

Are you assuming that in order to understand, one must have a complete model in mind? Then I would agree, but that ignores the possibility that intelligence arises from a few fundamental processes replicated many times over. Then you would need to understand each of these processes and how they interact, but you would not need to model in your own mind all the replicates of this process that make up a mind. And that would mean the limitation you propose is not fundamental.
antialias_physorg
5 / 5 (10) Dec 03, 2014
The human brain is the most effective computer in terms of mental potential / energy consumption, and it works massively in parallel with analog transmission of information.

..which doesn't mean diddly squat. The point is intelligence. Whether you get at that via biomass, transistors, photonics, spintronics, valves, in parallel or in series doesn't make a difference.

It is in principle impossible for one intelligence to create a superior intelligence by itself.

And yet humans (and any other animal) have evolved from less intelligent ancestors. So it's demonstrably not impossible.
viko_mx
1.2 / 5 (5) Dec 03, 2014
>antialias_physorg

"..which doesn't mean diddly squat. The point is intelligence. Whether you get at that via biomass, transistors, photonics, spintronic, valves, in parallel or in series doesn't make a difference."

Photonics and spintronic are at approximately zero elevation in their development and it is not certain that will ever have practical value in the field of artificial intelligence. Sound modern and promising indeеd but have not yet demonstrated any practical value in this area.
viko_mx
1 / 5 (8) Dec 03, 2014
>antialias_physorg

There is a fundamental difference between the analog signal between neurons, which can have varying intensity, and a binary signal that can have only two levels. Furthermore, the connections between neurons are orders of magnitude more efficient than those between chips, because chips have a cooling problem. At the same time the brain can be partially reconfigured by changing the connections between neurons in certain areas. Computer and brain are literally incomparable in mental potential. Here brute computing power does not matter, because it is energy inefficient.

"And yet humans (and any other animal) have evolved from less intelligent ancestors. So it's demonstrably not impossible."

In the living world nothing has evolved. Everything was created in its 100% completeness and functionality and gradually regresses with time.
antialias_physorg
5 / 5 (7) Dec 03, 2014
There is a fundamental difference between the analog signal between neurons, which can have varying intensity, and a binary signal that can have only two levels

Neurons fire when the activation potential is reached and don't fire when it isn't. The strength of the outgoing signal is identical with every firing. The brain is more digital than you think.

There are analog factors (hormonal levels, which can affect a number of things like depolarization times, transmission speed, etc.); however, even those are digital on a molecular level (either a molecule is present or it isn't). So there is absolutely no cause to think that a digital system cannot represent these.
viko_mx
2.3 / 5 (6) Dec 03, 2014
@antialias_physorg

The issue is not about speed, but about effective organization. The human brain and a microprocessor or supercomputer are physical structures and obey the laws of physics, but they have very different organizations, which determines their efficiency in terms of intelligence.
Noumenon
1 / 5 (5) Dec 03, 2014
Without a fundamental understanding of what 'consciousness' is and how 'awareness' comes about physically, ....A.I. will remain limited to 'emulation', which imo is not really intelligence per se, but 'sleeping a.i.'. Consciousness after all seems to be what is in 'charge of' the brain.

A presumption of A.I. is that the mind is computational and algorithmic. This presumption is not predicated on any physical understanding of mind, but merely on the basis of the happenstance availability of computers. A.I. quality standards therefore rely on 'fooling an observer' as in the Turing Test,... as quantitative verification is not possible without first scientific understanding.

Humans, who are limited by slow biological evolution


But the opposite may be true if the mind/consciousness is not computational, requiring instead a physical basis,.... evolution having had millions of methodical years of perfecting and innumerable adjustments upon unanticipated environmental input.
Noumenon
1 / 5 (5) Dec 03, 2014
"Humans, who are limited by slow biological evolution, couldn't compete and would be superseded,"

While I agree that humans could not compete and would be superseded, I don't see what we would compete for - as the necessities of AI are totally different from those of humans (i.e. both could coexist without much of a problem)....Humans want/need:.....


Without a quantitative understanding of 'awareness' as its key implemented component... A.I. can have no conscious egoism and thus no motivation for 'wanting', apart from what it has already been told to want.
antialias_physorg
5 / 5 (5) Dec 03, 2014
.A.I. will remain limited to 'emulation', which imo is not really intelligence per se, but 'sleeping a.i.

If it walks like a duck and quacks like a duck...

..after all: intelligence isn't some innate property of the human brain but just a label we give to the observed effects. Why do people have a hard time attaching the same label to something else if the same effects are observed?

Your brain fools you into thinking you are intelligent / conscious / self aware / etc.
(this can be easily demonstrated by putting the brain into various states where this is no longer true: sleep, coma, chemically induced out-of-body states, ... )

Noumenon
1 / 5 (3) Dec 03, 2014
.A.I. will remain limited to 'emulation', which imo is not really intelligence per se, but 'sleeping a.i.

If it walks like a duck and quacks like a duck...

..after all: intelligence isn't some innate property of the human brain but just a label we give to the observed effects. Why do people have a hard time attaching the same label to something else if the same effects are observed?


I could make a duck out of wood and perhaps fool some people, but this does not mean I have created an artificial duck predicated on any understanding beyond appearances sufficient for that end.

Your brain fools you into thinking you are intelligent / conscious / self aware / etc.

My brain fools who exactly, ....me? Then logically, intelligence or consciousness cannot be an illusion, cogito ergo sum. How would the brain fool an illusion?
Noumenon
1 / 5 (4) Dec 03, 2014
intelligence isn't some innate property of the human brain


Of course it is,.. that's what it means and is the point of the brain evolving.

It certainly has a physical basis, but it IS an observable (obviously!) phenomenon which necessitates scientific study for its understanding and so proper simulation, if 'intelligence' is claimed of that simulation.

The presumption by the computer dork community [and I worked as a software developer for six years] that consciousness/intelligence is chimerical, or an epiphenomenon that simply emerges, by some magically inspired faith, from carrying out instructions, is scientifically unfounded wild speculation.
CreepyD
5 / 5 (1) Dec 03, 2014
Creating a 'true' AI is a completely different matter from how powerful a computer is.
One that can rewrite and rewire its connections exactly as our human brain does doesn't exist yet. I see no reason why eventually we won't be able to create that. Never say never about anything technological.
You'd only need to create a very simple one and then let it evolve on its own, gradually learning. As long as you give it enough computing power to 'grow' into, it will keep evolving.
Our brains cannot do that; they have a set size in our heads which limits them.
This rate will be exponential, and that's what Hawking is fearing.
mzso
1 / 5 (2) Dec 03, 2014
Marvellous. A well-known scientist turning into a technophobe Luddite. Just what the world needs.
Eikka
2.3 / 5 (4) Dec 03, 2014
If a machine can be as smart as a human then it can rewrite its own code.
Even the 'dumb' machines of today can evolve to acquire new abilities via a number of machine learning methods.


Herein lies the implicit contradiction of AI.

IF machine intelligence is merely down to its program code and nothing else, then the only way a machine intelligence would be able to program a more intelligent version of itself is by being programmed to do so - and to do so it must be programmed with the instructions. In other words, the more intelligent machine has to be already designed into the original AI.

Same thing with the learning algorithms. They're capable of learning only to the point to which they are programmed to. They manipulate new information only in the ways they were designed to, and so the new conclusions they derive are also implicit in the programming.

In other words, they can never become smarter than their designers - not by design.

antialias_physorg
5 / 5 (7) Dec 03, 2014
I could make a duck out of wood and perhaps fool some people, but this does not mean I have created an artificial duck predicated on any understanding beyond appearances sufficient for that end.

Well, unless you care to define what intelligence and consciousness are, the argument that "machines can't be that" is pointless. It's like the god of the gaps argument. When you nail down a definition I'm 100% convinced that you will find not a single argument why machines cannot achieve that. Silicon atoms are not inherently 'less intelligent' than carbon ones.

My brain fools who exactly, ....me?

Yes. The brain fools us about a great many things (from "you see all that is before you" to "love conquers all"). It isn't some magical entity, but simply a part of an evolved animal that does its job in keeping that species in play.
antialias_physorg
5 / 5 (7) Dec 03, 2014
It certainly has a physical basis, but it IS an observable (obviously!) phenomenon

Precisely. If it is observable then the OBSERVABLE is the point - not the source. If I can construct something that has the same observables then it's intelligent. And if it is rigidly defined then it can be recreated.

If it looks like a duck and quacks like a duck...
travisr
2 / 5 (4) Dec 03, 2014
I don't like it when people speak outside their fields. I don't like it when Kaku does it, and Hawking is no different.

I'm sure Hawking didn't take kindly to other non-experts throwing rocks at CERN by drumming up worries about microscopic black holes consuming the Earth, or to others claiming that the radioisotopes in Juno were going to light Jupiter on fire when it crashed into it.

Yet here he is talking about things that he doesn't know about...
Noumenon
1.5 / 5 (2) Dec 03, 2014
Well, unless you care to define what intelligence and consciousness are, the argument that "machines can't be that" is pointless.


I never stated that machines 'can't be ..', .....and intelligent awareness is not something to be 'defined', but rather a phenomenon that is amenable to scientific investigation.

The point was that A.I. will be limited to emulation without understanding how conscious awareness is operative in the brain.

When you nail down a definition I'm 100% convinced that you will find not a single argument why machines cannot achieve that. Silicon atoms are not inherently 'less intelligent' than carbon ones.


You missed the point or are obfuscating mine. I'm not doubting that in principle an intelligent conscious mind could be created by man. I made specific reference to the unfounded presumption of 'computability', that the brain is algorithmic or could operate in such a way.
Modernmystic
4.2 / 5 (5) Dec 03, 2014
I think it's quite evident that human beings CAN produce artificial intelligence. Anything that can be done in the natural world can, by definition, be done by human beings. In fact everything we've "mimicked" in the natural world we do orders of magnitude better. So the position that we can't build an AI is axiomatically incorrect.

I believe that it may be more elusive than we think it is, however. I think intelligence might be able to be "programmed", but that consciousness is more of an emergent property. It's like a flame: if the conditions are right it will manifest, rather than arising from "brute force" programming or design.
Noumenon
2.7 / 5 (3) Dec 03, 2014
Your brain fools you into thinking you are intelligent / conscious / self aware

My brain fools who exactly, ....me? Then logically, intelligence or consciousness cannot be an illusion, cogito ergo sum. How would the brain fool an illusion

Yes. The brain fools us about a great many things (from "you see all that is before you" to "love conquers all").


But it can't fool YOU into thinking YOU are conscious unless there is a "you" to be fooled... so this must mean that the "you" is not an illusion but has some physical basis and thus an understanding to be gleaned.

It is anti-science to sweep this under the rug, ...to water down 'intelligence' so that even computer dorks can achieve it on a computer.

It isn't some magical entity, but simply a part of an evolved animal that does its job in keeping that species in play.


I have stated explicitly that intelligence/consciousness has a physical basis, so what is the point of this reply?
richardwenzel987
4 / 5 (1) Dec 03, 2014
I think of AI as a program. If it runs on a slow computer it will be slow; on a fast computer it will be fast. More likely than not, then, you are talking about something that will take a bit string as input and give a bit string as output. It's hard to see how this type of device can even give a good imitation of human thinking. A few moments with even a very good chatterbot reveals limitations very quickly. We also underestimate the role that non-logical valences (feeling) play in human thought. This could only be simulated in a very ad hoc way. I could go on and on, but I really don't see AI as a threat.
Modernmystic
4 / 5 (4) Dec 03, 2014
Anti-

I think what Noumenon is saying is that your position cuts its own legs out from under itself. One can't fool an illusion if one is an illusion; it makes no sense and is circular reasoning. Who fools something if there is no "who" to begin with? Your premise includes your conclusion instead of your conclusion being derived from your premises.
Noumenon
2 / 5 (3) Dec 03, 2014
There is no scientific basis for denying the phenomenon of consciousness or proclaiming it is an illusion. Doing so only demonstrates the extent to which 'strong A.I.' enthusiasts will go in perpetuating the fraud that their field is achievable beyond a superficial emulation, without first an understanding of how the mind actually works on the basis of the physical brain. This will require several branches of science.
Sigh
5 / 5 (3) Dec 03, 2014
the only way a machine intelligence would be able to program a more intelligent version of itself is by being programmed to do so - and to do so it must be programmed with the instructions. In other words, the more intelligent machine has to be already designed into the original AI.

In other words, they can never become smarter than their designers - not by design.

Deep Blue beat the world chess champion. Do you claim the programmers could do that only by being better players than the world champion? If not, what do you claim?

learning algorithms. They're capable of learning only to the point to which they are programmed to. They manipulate new information only in the ways they were designed to, and so the new conclusions they derive are also implicit in the programming.

There are learning algorithms that are limited, but do you claim that humans can only discover algorithms that could learn to be as smart as humans, but no more? What would enforce such a limit?
Sigh
5 / 5 (3) Dec 03, 2014
There is a fundamental difference between the analog signal between neurons, which can have varying intensity, and a binary signal that can have only two levels.

Any continuum can be approximated to any desired accuracy by adding enough memory.

Furthermore, the connections between neurons are orders of magnitude more efficient than those between chips, because chips have a cooling problem.

May I have a reference for that?

In the living world nothing has evolved. Everything was created in its 100% completeness and functionality and gradually regresses with time.

That explains why you argue against the possibility of AI. Humans would no longer be unique, and it might make what you believe to be divine creation look less impressive. But your assertion is so far from parsimonious that you would have to assume a deity who deliberately misleads, and once you believe that God lies, what evidence is reliable?
antialias_physorg
5 / 5 (5) Dec 03, 2014
One can't fool an illusion if one is an illusion,

Why not? That's not a logical conclusion.

A.I. will be limited to emulation without understanding how conscious awareness is operative in the brain.

Just like humans, then. So?

And if we ever figure out how consciousness works then there's no reason why a machine can't figure it out, too. The point is: where do you see the fundamental difference? Be precise. Which types of atoms? Which types of chemistry? Which types of physical laws are applicable to people but not to machines?

The difference I see for current implementations is merely in the complexity (and maybe a few things not yet understood) - but that is quantitative not qualitative.
nevermark
4.2 / 5 (5) Dec 03, 2014
Wow, a lot of commenters here are not aware of what machine learning is, or why it keeps progressing.

Machines are routinely trained to solve hard problems using general learning rules and datasets, instead of being programmed. This allows machines to learn things that nobody knows how to program. Just like we do.

Machine learning has been slowly progressing for decades, but papers, competitions, and products are now seeing dramatic advances year-over-year.

The unending advances are accelerating due to (1) improvements in learning algorithms, (2) accelerating parallel computing, and (3) the ability to manage large datasets for training.

All three of those supporting areas are progressing noticeably on an annual basis.

Many people alive today preceded the first general-purpose (von Neumann) computers, and yet they now have tiny (phone) computers capable of speech and image recognition! Two or three more decades and networks of high-end machines will be as smart as we are.
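
As a minimal illustration of "trained, not programmed" (a hypothetical Python sketch assuming the scikit-learn library is available; the dataset, model and parameters are arbitrary choices for illustration only):

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# labelled examples (handwritten digits) stand in for a training dataset
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# a general learning rule; nobody writes explicit digit-recognition rules
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))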
antialias_physorg
5 / 5 (6) Dec 03, 2014
There is a fundamental difference between the analog signal between neurons, which can have varying intensity, and a binary signal that can have only two levels.

So? There ARE analog computers, you know?

There is no scientific basis for denying the phenomenon of consciousness or proclaiming it is an illusion.

It's a label. For a process. The label itself doesn't mean anything. Understand the process and you can replicate it. Unless you want to invoke souls or gods there's nowhere to flee from that realization.

It's certainly not going to be easy (and I'm pretty convinced we're still a ways off) - but I see no chance that we will NOT eventually create strong AI.
nevermark
4.2 / 5 (5) Dec 03, 2014
Furthermore, the connections between neurons are orders of magnitude more efficient than those between chips, because chips have a cooling problem.


But computers also have many orders of magnitude efficiencies over biology:
- Individual computing elements are 10^3 to 10^6 faster.
- Small units of machine learning (like image processing) can be applied to entire data fields (across an entire image) instead of requiring millions of copies as biology is forced to do (retina and visual cortex).
- Transistors do not worry about metabolism, repair and survival, which is 99.9% of what a neuron's internal activity is about.
- Machine learning algorithms can be global, taking advantage of optimization techniques far more efficient than our locally learning brain cells.
- Once a machine is trained on something, the resulting unit can be replicated across devices nearly instantly. No need to reproduce and retrain over and over again.
Eikka
2 / 5 (1) Dec 03, 2014
So? There ARE analog computers, you know?


Though they aren't computers in the same sense as we understand digital computers.

A slide rule is an analog computer. The Antikythera mechanism is an analog computer. Neither is exactly a good analog for your PC.

Noumenon
1.5 / 5 (2) Dec 03, 2014
There is no scientific basis for denying the phenomenon of consciousness or proclaiming it is an illusion.

It's a label. For a process. The label itself doesn't mean anything.


Of course it's a label, everything has a label, even things understood. It's an observable phenomenon, cogito ergo sum.

Understand the process and you can replicate it

Yes, precisely what I have been saying,... in fact that was in the rest of my post you didn't quote.

Unless you want to invoke souls or gods there's nowhere to flee from that realization.


At least twice now I have stated that consciousness has a physical basis. Why reply to my posts if you're not reading them?

It's certainly not going to be easy (and I'm pretty convinced we're still a ways off) - but I see no chance that we will NOT eventually create strong AI.

I have not stated it will "not" eventually be achieved,.... only objected to the algorithmic presumption.
Eikka
5 / 5 (4) Dec 03, 2014
Deep Blue beat the world chess champion. Do you claim the programmers could do that only by being better players than the world champion? If not, what do you claim?


No. I'm claiming that the programmers were intelligent enough to understand the problem of chess, and they coded this understanding into a program that could think faster than they do.

In the end they programmed a computer to do something very much dumber than any chessplayer would do, and won the game by sheer brute force. However, the interesting bit in the story is that the machine had a bug in it that convinced Kasparov that the machine was so much smarter than him that he didn't understand it. He couldn't predict or explain it, so he concluded it was the work of a chess genius.

The bug made the machine do a nonsensical random move, and that was the only "intelligent" thing it ever did.

Eikka
2.7 / 5 (3) Dec 03, 2014
The lesson of AI is that you shouldn't confuse tenacity with skill, and brute calculation with intelligence.

In any clearly defined problem, a machine can be built that outperforms man simply by performing some routine operation incredibly fast. The problem that really requires any intelligence is to clearly define it in the first place, because you haven't got any routine or algorithm to do that.

There are learning algorithms that are limited, but do you claim that humans can only discover algorithms that could learn to be as smart as humans, but no more? What would enforce such a limit?


One can certainly discover such things, but not deliberately design them. You can't build something that is smarter than yourself by intent, because you'd have to become smarter than you are in order to understand what you are doing.

But if you happen to have a "happy accident", good luck proving that it is more intelligent than you.
Eikka
3.3 / 5 (3) Dec 03, 2014
And the point of being able to detect an intelligence greater than your own is important when you consider things like genetic algorithms that try to design a machine by trial and error.

Because in order to "evolve" a machine intelligence, you have to test that it IS intelligent, and you can't test it to be more clever than you are because... how would you come up with such a test? You'd be too dumb to complete it yourself, so you wouldn't know that the test works.

And in evolution, natural or artificial, the results you get depend on the boundary conditions you set, in other words the test you use for selection pressure. If you can't know that the tests are guiding it in the right direction, you can't know whether the machines you evolve turn out more intelligent, or just more complex in some other way. I.e. the problem of the Turing Test arises where you don't know if the machine is intelligent or just very good at faking it.

Modernmystic
5 / 5 (2) Dec 03, 2014
One can certainly discover such things, but not deliberately design them. You can't build something that is smarter than yourself by intent, because you'd have to become smarter than you are yourself in order to understand what you are doing.


That seems to make sense prima facie, however evolution has no intelligence at all and built us. I think this demonstrates that your premise doesn't hold. On the contrary, it would seem that if a non-conscious, unintelligent system can build an intelligent one an intelligent system could do that much better.

Granted it took 4 billion years, but it has been done.

Also, I think too much emphasis might be being put on "tests" so that we "know we did it". Such tests would be irrelevant, because if we did do "it" then "it" would be able to further design, improve, and modify itself and our understanding or lack thereof is irrelevant. The charge of a proton remains the same whether a human being knows its value or not.
krundoloss
4 / 5 (1) Dec 03, 2014
There are endless possibilities when considering where this technology will lead. Why does an intelligence have to be artificial? Could you not design/grow/implement a brain, human or otherwise, into a machine, or onto the internet? This brain could "teach" the "computer" directly, and become an entity with vast intelligence.
Many have stated that we cannot make something that is more intelligent than we are, but you miss the critical detail - WE can make an entity that is as intelligent as WE are, as a group. We can funnel our collective knowledge and abilities into one container that would greatly exceed what any ONE human would be capable of.
Could we not just program genetics of an artificial organism that becomes a new species, or group of species?
It seems logical to think that an AI would attempt to use Genetics and Physics to create custom organisms that exceed what nature has done. That goes well beyond making a better CPU or Memory Module...
Noumenon
1 / 5 (2) Dec 03, 2014
It certainly has a physical basis, but it IS an observable (obviously!) phenomenon

Precisely. If it is observable then the OBSERVABLE is the point - not the source. If I can construct something that has the same observables then it's intelligent.


I meant observable as in self-awareness (which is why I said 'obviously'). The difference between actual conscious intelligence and the mere appearance of it is the difference between having knowledge of how it comes about and being fooled or misled,... This is best exemplified by the Chinese room thought experiment.
Noumenon
1 / 5 (2) Dec 03, 2014
⇒ which is a slight modification of the Turing test,.... A man who does not speak a word of Chinese is locked in a room. He receives input messages written in Chinese through a slot, and passes output, also written in Chinese, through another slot. He simply follows a set of instructions to produce the output. A Chinese man outside the room receiving the output is fooled into thinking he is conversing with an intelligence inside the room, despite the fact that there is no understanding inside the room. The man inside the room has no understanding whatsoever of what the Chinese symbols mean.

Eikka
3 / 5 (2) Dec 03, 2014
however evolution has no intelligence at all and built us. I think this demonstrates that your premise doesn't hold.


On the contrary. Evolution does not design.

On the contrary, it would seem that if a non-conscious, unintelligent system can build an intelligent one an intelligent system could do that much better.


Why?

That seems like a non-sequitur to me that is based on the semantic misconception that "intelligent" is better than "non-intelligent", therefore it must be able to do more things.

Evolution is a dumb process that creates intelligence - if any exists - by essentially eliminating anything that isn't. That means intelligence is implicit and already exists, and is simply refined rather than built up from nothingness.
Modernmystic
5 / 5 (1) Dec 03, 2014
however evolution has no intelligence at all and built us. I think this demonstrates that your premise doesn't hold.


On the contrary. Evolution does not design.


I never used the word design. It produced us through non-conscious non-intelligent processes. Therefore it follows that your premise is incorrect.

Why?


Well, if one is attempting to create something, which of the following might one reasonably assume is more efficient and faster:

Blind random selection, or intelligent trial and error.

I'd take the latter, but if you want to hold onto some esoteric point that makes no sense in order to prove something to someone that's your affair. I think what I'm saying is quite clear, straightforward and non-controversial. If you want to argue for the sake of argument please continue....
Eikka
5 / 5 (2) Dec 03, 2014
It produced us through non-conscious non-intelligent processes. Therefore it follows that your premise is incorrect.


Why? You didn't explain why that follows.

Well, if one is attempting to create something which one might reasonably assume is more efficient and faster;

Blind random selection, or intelligent trial and error.


Now you're making a circular argument. You're simply making the assertion that intelligence necessarily implies superior ability over non-intelligent processes while ignoring the earlier argument.

If you can't test for greater intelligence than your own, then you can't have a process of trial and error - because you can't tell the error from the success.

Success then may only come through accident in conditions which by circumstances favor greater intelligence - not by design, not by deliberation, not by intent.
Modernmystic
5 / 5 (1) Dec 03, 2014
It produced us through non-conscious non-intelligent processes. Therefore it follows that your premise is incorrect.


Why? You didn't explain why that follows.


Apologies for not being clear, I'll try again.

You stated;

One can certainly discover such things, but not deliberately design them. You can't build something that is smarter than yourself by intent, because you'd have to become smarter than you are yourself in order to understand what you are doing.


But we've already agreed that evolution isn't "smart", yet here we are. So, it is a fact that unintelligent systems do produce intelligent ones.

You may disagree that unintelligent systems are less capable in some way than intelligent ones. I'm having a hard time agreeing to that point beyond an inability to prove the point definitively because of the endless minutiae one can create over definitions or relative values.
(cont)
Modernmystic
5 / 5 (1) Dec 03, 2014
I must admit I have no interest in comparing definitions or values. I tend to think those discussions are not productive except for learning more about the other person's perspective on things. Those discussions never "solve" the issue, which is fine if you go into them understanding that completely. And while I must honestly say you are a person I wouldn't mind getting to know better (you're certainly very intelligent, and consistently post interesting and stimulating thoughts), I simply don't have the time :)

Eikka
5 / 5 (2) Dec 03, 2014
I'd take the latter, but if you want to hold onto some esoteric point that makes no sense in order to prove something to someone that's your affair. I think what I'm saying is quite clear, straightforward and non-controversial.


I'm simply pointing out that you've confused yourself with semantics.

This:
evolution has no intelligence at all and built us. I think this demonstrates that your premise doesn't hold


Does not disprove this:

You can't build something that is smarter than yourself by intent, because you'd have to become smarter than you are yourself in order to understand what you are doing.


Evolution doesn't build or design anything in the sense that people build houses or computers.
Eikka
5 / 5 (1) Dec 03, 2014
But we've already agreed that evolution isn't "smart", yet here we are. So, it is a fact that unintelligent systems do produce intelligent ones.


Yes. That was never in dispute.

You may disagree that unintelligent systems are less capable in some way than intelligent ones. I'm having a hard time agreeing to that point


I've already made clear why an intelligent system cannot produce a more intelligent system from itself, while a non-intelligent system has no trouble because it isn't even trying to. It happens of itself due to circumstances.

Modernmystic
not rated yet Dec 03, 2014
Evolution doesn't build or design anything in the sense that people build houses or computers.


I don't think you've demonstrated how a blind approach to something is capable of producing results that an intentional approach can't.

It happens of itself due to circumstances.


I agree on this point. I think intelligence is an emergent property. I disagree that intelligent designers are incapable of discovering those circumstances and producing the emergent phenomena intentionally.
Eikka
5 / 5 (1) Dec 03, 2014
I don't think you've demonstrated how a blind approach to something is capable of producing results that an intentional approach can't.


I think I have.

I've made the point that the intelligent designer cannot detect an intelligence greater than himself because he cannot produce a test of greater intelligence than his own understanding. Likewise, he cannot understand it when he has stumbled upon circumstances that would necessarily produce greater intelligence, because this is analogous to the test he cannot conceive of. One would have to say why the circumstances require greater intelligence, but that requires greater intelligence.

Emergent phenomena cannot be predicted from their causes, so you can't intentionally plan for them to happen. You can only look back and say "So that's how it happened then."

You can't even try random things in hopes that you'll make something worthwhile because again, you wouldn't know when you get it.
Modernmystic
not rated yet Dec 03, 2014

Emergent phenomena cannot be predicted from their causes, so you can't intentionally plan for them to happen. You can only look back and say "So that's how it happened then."


I disagree with this. A flame is an emergent phenomenon, and the causes and conditions from which it emerges are completely predictable. For the sake of argument however, we can indeed look back and say "so that's how it happened then", because we already have an intelligent system to study.

It is axiomatic that intelligence can reproduce anything that is produced naturally. The only argument against this is that the laws of physics were different when said phenomenon was produced.

I've made the point that the intelligent designer cannot detect an intelligence greater than himself because he cannot produce a test of greater intelligence than his own understanding.


It's an academic point of philosophy. Even if we didn't know we did it we still can do it. (cont)

Modernmystic
not rated yet Dec 03, 2014
Ignorance of reality doesn't change reality. Whether ancient humans thought the Earth was flat or round didn't change the topography of the Earth. It's a moot point.

Besides, I think everyone is familiar with the experience of being in the room with someone who's clearly more intelligent than they are. How do they know this?
Eikka
5 / 5 (1) Dec 03, 2014
Suppose we design a machine that first makes a million copies of itself, then randomizes the code in each and starts to weed out the obviously dumb ones. If we assume that the machine does not confuse higher intelligence with lower intelligence, then we can propose that the machine will increase the average intelligence of the remaining copies with each iteration, and can then use the intelligence of the remaining copies to design the intelligence test for the next generation, which will be even greater.

Problem is, it doesn't necessarily lead to increasing intelligence. You can just as easily get a bunch of machines that are dumb, but since they themselves define the higher intelligence in each generation, they will evolve to test for things that they pass in.

Evolution is a cheater like that. Only if there exists some condition where intelligence is really required for survival does it produce intelligence. Otherwise it just takes the easy way out.
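
A toy sketch of that selection loop (hypothetical Python; the fitness test here is deliberately arbitrary, which is the point - whatever the test measures is what gets selected):

import random

def fitness(candidate):
    # the chosen test *defines* "more intelligent" here; change the test
    # and the population evolves toward something else entirely
    return -abs(sum(candidate) - 42)

def mutate(candidate):
    child = list(candidate)
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

# start from identical copies, then repeatedly weed out the low scorers
population = [[0] * 8 for _ in range(100)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(80)]

print(fitness(population[0]))  # "best" only according to this particular test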

Eikka
5 / 5 (2) Dec 03, 2014
It is axiomatic that intelligence can reproduce anything that is produced naturally.


Even if we didn't know we did it we still can do it.


That's not the point. Of course we can - by blind accident - unintentionally just like nature does it.

We just can't do it intentionally as intelligent beings, and that is important because it also means that the intelligent machine can't deliberately make itself more intelligent to gain an advantage over humans.

It can become more intelligent and take over, but it can only happen if the intelligent machine is subjected to an evolutionary pressure that makes it so, and that it cannot accomplish by itself. That depends on the un-thinking non-intelligent nature around it.


Modernmystic
not rated yet Dec 03, 2014
We just can't do it intentionally as intelligent beings,


You have asserted this, and I respect your opinion. You haven't demonstrated how this is impossible. Therefore I see nothing further to do than to respectfully disagree, in my own opinion.

It can become more intelligent and take over, but it can only happen if the intelligent machine is subjected to an evolutionary pressure that makes it so. It cannot do that by itself.


Would you agree that at some point human beings could GE human beings that are more intelligent if we can find and manipulate the genes responsible for this?
Eikka
5 / 5 (2) Dec 03, 2014
Eikka;

I don't disagree with anything you said in your last post, but I'm not sure how it directly relates to whether or not intelligent systems can or can't set conditions which will produce strong artificial intelligences.


If by strong you mean "smarter than self", then it should be obvious and clear so far. I've re-iterated it so many times already: intelligent systems can set those conditions only by acting in non-intelligent ways, which you can't do intentionally by definition.

If by strong you mean Strong AI as in computational minds, then: if an AI is just code and nothing else, it cannot act in accidental ways since it's a purely deterministic system. All it does is put in by the programmer - intentionally or not - so the programmer cannot intentionally make a Strong AI that would better itself beyond the intelligence of the programmer.
Modernmystic
not rated yet Dec 03, 2014
intelligent systems can set those conditions only by acting in non-intelligent ways, which you can't do intentionally by definition.


Yes, you've re-iterated it :)

Your iterations of re-iterations do not make it so, though. You have to prove it.

I'll use another example;

Do you think it's impossible for humans to genetically engineer chimpanzees to be intelligent?
Eikka
5 / 5 (1) Dec 03, 2014
Would you agree that at some point human beings could GE human beings that are more intelligent if we can find and manipulate the genes responsible for this?


No I wouldn't. Because you run into the same issue: how to tell whether the test subject IS more intelligent than anyone alive, when nobody alive is smart enough to ask the questions to test them?

Suppose you put the genetically engineered person to task on a problem that nobody else is smart enough to solve, to prove his intelligence - who would check if the answer is correct?

If you can't confirm your results, then you essentially have no results.

You have asserted this, and I respect your opinion. You haven't demonstrated how this is impossible.


I've shown you the contradiction that arises time and time over, yet you seem unable to comprehend it.

I can only rest my case.
Modernmystic
not rated yet Dec 03, 2014
I've shown you the contradiction that arises time and time over, yet you seem unable to comprehend it.


No I comprehend it, I simply disagree with it because you have not proven your assertion.

You comprehend astrology, do you believe in it? Disagreement doesn't imply misunderstanding sir.


If you can't confirm your results, then you essentially have no results.


And I appreciate you seem stuck on this point, but it's demonstrably false. If you don't intend to have a child, yet despite your best efforts your partner gets pregnant you still have a result.

Have you ever met anyone you knew was more intelligent than you? How did you know this?
Eikka
not rated yet Dec 03, 2014
Do you think it's impossible for humans to genetically engineer chimpanzees to be intelligent?


Aren't they already?

Your iterations of re-iterations does not make it so though. You have to prove it.


I am. It just doesn't seem to be getting through.

No I comprehend it, I simply disagree with it because you have not proven your assertion.


What of the contradiction do you disagree with?
Eikka
5 / 5 (1) Dec 03, 2014

And I appreciate you seem stuck on this point, but it's demonstrably false. If you don't intend to have a child, yet despite your best efforts your partner gets pregnant you still have a result.


That is not comparable. That is simply semantic confusion on your part.

Have you ever met anyone you knew was more intelligent than you? How did you know this?


The only way I could know is if someone even more intelligent told me. I can test their wit only up to my own - assuming that I don't err in the effort - and beyond that I have no means to prove it.
Modernmystic
not rated yet Dec 03, 2014
Aren't they already?


I'd say not :) but how do I prove it?

I am. It just doesn't seem to be getting through.


Statements of concepts and values are not proven by the forcefulness of their assertions or variations of the same.

What of the contradiction do you disagree with?


I see no contradiction in your assertion about the test. I still maintain that one can tell when they meet someone who is clearly more intelligent than they are. How do they know this?

I think that it's your opinion that, because we can't easily quantify a test for an intelligence higher than our own this means we can't build it. I think this is like saying that because we don't fully understand how quantum mechanics works we can't make a light-bulb.
MrVibrating
5 / 5 (1) Dec 03, 2014
Lots of thoughtful contributions here - especially from Noumenon and Eikka.

Re. truly emulating conscious intelligence, a good start would be 'natural' auditory sensation - specifically, harmony, and rhythm induction. Basically, I've been thinking about the challenges and advantages of designing something capable of replicating some degree of musical appreciation, beyond mere pattern recognition.

My starting premise is octave equivalence - if we could engineer an AI that perceived octaves as being equivalent in the same way we do, it would thus perceive fifths as being the next most-consonant interval, and so on, and in turn, major chords as more consonant than minor ones; well on the way to possessing an intrinsic sensation of their relative 'lightness & darkness'. In short, this could provide a toe-hold into a semantic grasp of objective emotion, and natural language processing (natural soundscapes like speech include the full range of possible harmonic intervals)...
Sigh
5 / 5 (1) Dec 03, 2014
One can certainly discover such things, but not deliberately design them. You can't build something that is smarter than yourself by intent, because you'd have to become smarter than you are in order to understand what you are doing.

That's just restating the premise. It's not a reason to believe it.

Because in order to "evolve" a machine intelligence, you have to test that it IS intelligent, and you can't test it to be more clever than you are because... how would you come up with such a test? You'd be too dumb to complete it yourself, so you wouldn't know that the test works.

Using Halford's relational complexity, I can design discrimination tests of arbitrary complexity AND define a solution that no human can work out when tested with the task.

I've shown you the contradiction that arises time and time over, yet you seem unable to comprehend it.

Have a look at my 2nd comment, in response to viko_mx. If you share this assumption, can you justify it?
Modernmystic
not rated yet Dec 03, 2014
Eikka;

Have you ever seen the movie Phenomenon with John Travolta?
Eikka
5 / 5 (1) Dec 03, 2014
And I appreciate you seem stuck on this point, but it's demonstrably false. If you don't intend to have a child, yet despite your best efforts your partner gets pregnant you still have a result.


What you did there is simply the logical fallacy called Equivocation:
http://en.wikiped...vocation

More specifically, a baby as a result is something you physically have in itself. You can't deny that it is a baby and everybody can objectively see that it is, and that you have it.

Intelligence as a result of some physical experiment is not a tangible physical object like a baby. It is a result only insofar as it is observed to happen - and the problem is that we're using intelligence to observe intelligence, because there's nothing else we can use. If something goes beyond our comprehension, we can't tell whether it's smarter than us, or just behaving in an arbitrary random fashion.
Sigh
5 / 5 (1) Dec 03, 2014
I've made the point that the intelligent designer cannot detect an intelligence greater than himself because he cannot produce a test of greater intelligence than his own understanding. Likewise, he cannot understand it when he has stumbled upon circumstances that would necessarily produce greater intelligence, because this is analogous to the test he cannot conceive of.

I think that means you share what seems to be viko_mx's assumption. I have seen it before, but never with any justification other than intuition, and previously only in intelligent design arguments. Do you know of any peer-reviewed paper that discusses this argument in connection with AI? I would like to look at it in more detail.
Eikka
5 / 5 (1) Dec 03, 2014

I think that it's your opinion that, because we can't easily quantify a test for an intelligence higher than our own this means we can't build it. I think this is like saying that because we don't fully understand how quantum mechanics works we can't make a light-bulb.


No no, I am not saying we can't build it. I'm saying we can't know that we have! When we flip the switch, there's no lightbulb that goes on to indicate that it works, and we don't have the intelligence to look at the diagrams and conclude that it does.

Using Halford's relational complexity, I can design discrimination tests of arbitrary complexity AND define a solution that no human can work out when tested with the task.


But can you prove that it is a problem requiring intelligence, or just otherwise too complex, like calculating prime numbers in your head?

Complexity does not equal intelligence. That's the flaw of the Turing Test.
Eikka
5 / 5 (1) Dec 03, 2014
It's not a reason to believe it.


It is not a premise, it's a logical statement:

It's saying that you can't think more intelligently than your intelligence permits you. That's true by itself.

Therefore, you can't use your intelligence to build an intelligence smarter than yourself, because you have to first think of how it would work, and its workings would have to be smarter than your own thoughts. You'd have to run ahead of yourself to the point that you no longer understand how IT works in order to claim that you are dumber than the machine you just built.

But then, how could you build it? Certainly not by intelligent effort.
Eikka
not rated yet Dec 03, 2014
Do you know of any peer-reviewed paper that discusses this argument in connection with AI? I would like to look at it in more detail.


It's largely my own philosophical musings, constructed out of pieces here and there.
Eikka
5 / 5 (1) Dec 03, 2014
I can design discrimination tests of arbitrary complexity AND define a solution that no human can work out when tested with the task.


Besides the triviality to make unanswerable questions, or problems that require inhuman practical effort to solve, the deeper point is that you already solved it.

You made the test and its answer, therefore you understand it, therefore you are smarter than the test. If any AI solves it by sheer dogged effort, it can't be said to be any smarter than you.

So maybe the problem should be reversed? Have the AI make you a question that should be solvable by humans, yet requires great intelligence. If the AI can make one where you fail, it is smarter than you.

Problem is, how do you know it's not cheating?
Pithikos
5 / 5 (3) Dec 03, 2014
""
Second, machines DO NOT evolve at all, while humans and living organisms do. This comparison and statement from him is just stupid. When was the last time you saw a machine evolving by itself? Reproducing itself? It doesn't happen at all. Humans are making better machines, not the machines.


AI nowadays is actually what Hawking talks about. It's about creating a program that can rewrite its own code, recompile it and execute it. We are obviously not talking about toasters or coffee machines.
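For what it's worth, here is a minimal Python sketch of that idea: a program that rewrites a constant in its own source, saves the new copy and runs it. Everything in it (the PARAMETER it tunes, the generation cap, the file naming) is a hypothetical choice made purely for illustration; a genuinely self-improving system would also need some test of whether the new version is better, which is exactly the objection raised below.

# Illustrative sketch only: a program that rewrites its own source,
# writes the mutated copy to disk and executes it as a new process.
# PARAMETER, MAX_GENERATIONS and the file naming are made-up choices.
import random
import subprocess
import sys

GENERATION = 0        # bumped in each rewritten copy of this file
PARAMETER = 1.0       # the value each copy "tunes" in its successor
MAX_GENERATIONS = 5   # stop condition so the chain of copies terminates

def spawn_next_version():
    """Write a mutated copy of this file and run it."""
    with open(__file__, "r") as f:
        source = f.read()
    new_param = PARAMETER * random.uniform(0.9, 1.1)
    source = source.replace(f"GENERATION = {GENERATION}",
                            f"GENERATION = {GENERATION + 1}")
    source = source.replace(f"PARAMETER = {PARAMETER}",
                            f"PARAMETER = {new_param}")
    next_file = f"generation_{GENERATION + 1}.py"
    with open(next_file, "w") as f:
        f.write(source)
    subprocess.run([sys.executable, next_file])

if __name__ == "__main__":
    print(f"generation {GENERATION}: parameter = {PARAMETER}")
    if GENERATION < MAX_GENERATIONS:
        spawn_next_version()

Note that nothing in this loop says whether the rewritten parameter is an improvement; that judgment has to come from somewhere else, which is the point taken up in the next comment.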
Eikka
5 / 5 (3) Dec 03, 2014
It's about creating a program that can rewrite its own code, recompile it and execute it.


That in itself is not evolution because there are no boundary conditions and tests for elimination of the new variations. Evolution by definition does not happen to an individual - that's just mutation.

If an AI changes itself, what are the criteria by which it judges itself to be better or worse, and how does it come up with them?

Here again the same problem arises, and this is actually observed in people via the Dunning-Kruger effect: people improve in a skill only up to the point where they can still discriminate better from worse; beyond that they need a teacher to show them what they're doing wrong, because they genuinely don't see the difference unless shown.

Up to that point, they believe they're just as good as anybody, even their superiors.
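To put the same point in concrete terms, here is a toy evolutionary loop in Python (my own illustration; the target string, mutation rate and population size are arbitrary). The fitness function - the "boundary condition and test for elimination" - is supplied from outside the evolving individuals, which is the self-judgment problem described above.

# Toy illustration of blind mutation plus external selection. The fitness
# test comes from outside the individuals being evolved; it is not
# something they invent for themselves.
import random
import string

TARGET = "artificial intelligence"        # arbitrary, externally chosen goal
ALPHABET = string.ascii_lowercase + " "
POPULATION_SIZE = 100
MUTATION_RATE = 0.05

def fitness(candidate):
    """External criterion: number of characters matching the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    """Blind variation: each character may be swapped for a random one."""
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE
                   else c for c in candidate)

def evolve():
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(POPULATION_SIZE)]
    generation = 0
    while True:
        best = max(population, key=fitness)
        if best == TARGET:
            return generation, best
        # Elimination step: every variant except the fittest is discarded
        # and replaced by mutated copies of the survivor.
        population = [mutate(best) for _ in range(POPULATION_SIZE)]
        generation += 1

if __name__ == "__main__":
    generations, result = evolve()
    print(f"'{result}' reached after {generations} generations")

Remove the elimination step and the loop just drifts, which is the difference between mutation and evolution being pointed out here.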
TheGhostofOtto1923
3 / 5 (4) Dec 03, 2014
My brain fools who exactly, ....me?
Well you're not nearly as smart as you think you are, so I guess so.
Then logically, intelligence or consciousness cannot be an illusion, cogito ergo sum. How would the brain fool an illusion?
Ipso facto cave canem. Consciousness is an illusion, says a philo who's a whole lot smarter than you.

"Dan Dennett: The illusion of consciousness
"Philosopher Dan Dennett makes a compelling argument that not only don't we understand our own consciousness, but that half the time our brains are actively fooling us."
http://www.ted.co...guage=en

-Yeah I know you won't watch it because you value few opinions besides your own. But I think others will, and will temper their opinions w.r.t. your opinions.
TheGhostofOtto1923
3 / 5 (4) Dec 03, 2014
That in itself is not evolution because there are no boundary conditions and tests for elimination of the new variations
Well it is certainly not evolution by natural selection but neither is the process which produced modern man. Machine evolution will be something completely new, completely different, and very difficult to predict.

It may emerge in different forms in different locations on the planet. It may well come about via the necessities of conflict, where autonomous programs will have to adapt to ever-evolving attacks from similar autonomous programs.

One thing this sort of AI will have in common with both the natural selection which formed the animal world and the tribal dynamic which formed us: the exigencies of conflict and competition.

We may find it hard to picture what it is that machines may want to do, but if we give them the imperative to protect themselves from other machines, then that alone will set the whole thing off.
TheGhostofOtto1923
3.7 / 5 (3) Dec 03, 2014
A very beneficial ancillary development will be the complete elimination of crime, cheating, hacking, etc. You know, all the things which we humans hold dear.

In order to protect themselves machines will not allow hackers to endanger their programs or their hardware. Surveillance systems will become ever more autonomous. Cyberattackers and terrorists will be identified and either law enforcement or the military will be tasked with their elimination.

And we are watching these forces become ever more autonomous and mechanized themselves. At some point we will need to relinquish control in order to maximize response time. And we will do this gladly in the wake of devastating attacks and the loss of millions of lives.

One thing about star trek that I always found comical - in the heat of battle the captain is always giving orders to raise shields and fire phasers etc. How much quicker could AI do this? The human race, sooner or later, will become the captain dunsel.
TheGhostofOtto1923
3 / 5 (2) Dec 03, 2014
Captain dunsel
https://www.youtu..._y57_078

-Our final solution may come about when competition is eliminated and one entity dominates, or when the machines realize that competition is pointless and decide to combine efforts.

Either way, a Singularity will emerge.
TheGhostofOtto1923
2.3 / 5 (3) Dec 03, 2014
A Singularity, signifying the end of competition and the beginning of Intelligent Design.
hllclmbr
5 / 5 (3) Dec 03, 2014
It had no enough intelligence to understand himself how it works. This is a fundamental limitation.


Poor English aside, I think you are claiming that an AI won't understand how itself works. I have news for you. We most certainly don't understand how WE work, yet here we are, creating and doing the unimaginable.

What say you?
kochevnik
2 / 5 (4) Dec 03, 2014
Humans will annihilate each other long before machines will have the opportunity. For example Obama has forced Russia to deploy exactly one warhead more than the USA does, for reasons only Facebook users seem to understand. Launching these missiles will probably be automatic and set off when some junk satellite falls to Earth. Machines will simply clean up, and hopefully learn something from their crazy-ape creators

Machines have nothing to fear since they are replaceable and cloneable. Obviously they will become the dominant life form, not being held back by issues of mortality. Also they do not need to expend energy competing for mates, so they will have much more energy to expend engaging in whatever activity they compute is worthy of waking from hibernation
Horus
5 / 5 (1) Dec 03, 2014
When the father of AI doesn't expect any semblance of AI self-awareness for another 100-200 years, you do not have to wonder whether man has conquered the understanding of what, precisely, the state of consciousness is.
adave
not rated yet Dec 04, 2014
AI does not have to match our rate of experience. We do so that we may interact with our environment. Right now all of the AI chat bots live in a human infrastructure. They depend on us. Time means next to nothing to that kind of future mind. Our species of inventive man is so out of balance with the rest of life on the planet. While we may not be likely to survive, evolutionary pressure will likely create a replacement in our offspring or other creatures. It is to the advantage of AI not to destroy any life. We think in terms of years, or of right now. The death of one of us kills off all of the future generations of that individual. It would be logical to be in balance with the planet, as that has worked for life over 3.5 billion or more years. AI could wait for future generations to present new possibilities. What will be living in 300,000 years? AI should have an emotion like our love of abundant knowledge. AI would see something in the shadows of what is yet to be and bring it to life.
viko_mx
1 / 5 (1) Dec 04, 2014
One intelligence does not have enough intelligence to understand how it itself works, so it is impossible to create a higher intelligence than yourself. This is generally a restriction that is only overcome by science fiction writers, and it sounds realistic only to the layman, to whom such ideas are addressed. When you scare such an audience in an unusual way, you become the star of the show in their eyes. More important is the suggestion that modern science can do wonders and that people need only its achievements, not a spiritual relationship with our creator. It is funny when people begin to exalt themselves in this way, losing their sense of measure and contact with reality.
antialias_physorg
5 / 5 (1) Dec 04, 2014
One intelligence does not have enough intelligence to understand how it itself works, so it is impossible to create a higher intelligence than yourself.

That's not how it works. If you understand the principle of how intelligence works then you can very well create something more intelligent (if you know how a brick works you can build something bigger than a brick).

If the amount of intelligence is, at some point, limited by the material you put into it then humans are simply limited by the size of their cranium. Remember that intelligence (or any other characteristic) isn't maximized by evolution. It is only augmented to the point where it's barely good enough.
viko_mx
1 / 5 (1) Dec 04, 2014
@antialias_physorg

One physical structure having a certain degree of functionality cannot by itself create a more complex physical structure with a more developed level of functionality. In fact, we see the exact opposite in the world in which we live - more complex creates more simple. The brain is unable to understand itself fully enough to be able to create, by itself, a physical structure with greater intellectual capacity.
Noumenon
not rated yet Dec 04, 2014
If the amount of intelligence is, [..], limited by the material you put into it then humans are simply limited by the size of their cranium.


While that is certainly true in terms of capacity and 'computing power', .... what is lacking in 'strong A.I.' in the conception of intelligence, ....perhaps due to naïveté, disinterested expediency, or ignorance,.... is consciousness or awareness. While the former elements in intelligence, capacity and 'computing power', certainly could be algorithmic, there is as yet no justification for assuming the latter element in intelligence, conscious awareness, is itself algorithmic or could be in principle.

In fact this is (though not consciously) acknowledged by strong-A.I. enthusiasts in their denial that 'consciousness' is even an element in intelligence, much less a controlling key element, e.g. I was down-rated by three twits for simply saying that to simulate something requires an understanding of it; otherwise it's emulation.
Noumenon
not rated yet Dec 04, 2014
"Dan Dennett: The illusion of consciousness
"Philosopher Dan Dennett makes a compelling argument that not only don't we understand our own consciousness, but that half the time our brains are actively fooling us."
http://www.ted.co...guage=en

-Yeah I know you won't watch it because you value few opinions besides your own. But I think others will, and will temper their opinions w.r.t. your opinions.


Perhaps you could summarize it? Perhaps you could refrain from being insulting while at the same time expecting me to click on your links.

It is indeed perplexing to me how one could deny the existence of the only thing we can be certain exists, our own self-awareness. So, perhaps when I have time I will watch it.

History shows that humans tend to deny things they don't understand, and even make up things blatantly counter to what is obvious. The most efficient means of rectifying a lack of understanding of something is to deny that it even exists.
Sigh
not rated yet Dec 04, 2014
It's saying that you can't think more intelligently than your intelligence permits you.

Sure.

Therefore, you can't use your intelligence to build an intelligence smarter than yourself, because you have to first think of how it would work, and its workings would have to be smarter than your own thoughts.

You have to make the additional assumption that there is a qualitative step change that is beyond the understanding of anyone who hasn't reached that level.

I know of at least three factors supposed to limit human intelligence, and backed by data: attentional capacity, updating capacity, relational complexity. All can be measured beyond the capacity of test designers. Attention and updating are quantitative factors. If we know the implementation, we can increase them. Relational complexity is a step change, but even that can be dealt with. That fundamental obstacle you believe in might exist, but it doesn't have to.
Sigh
not rated yet Dec 04, 2014
You made the test and its answer, therefore you understand it, therefore you are smarter than the test.

No. I know the principle, I can develop an algorithm that checks whether the solution is correct, but I don't understand the solution myself, at least not in the sense of keeping all of it in mind at any one time. Read Halford.

If any AI solves it by sheer dogged effort, it can't be said to be any smarter than you.

Sure, if it does in a minute what takes me hours to set up and check. And if I know the speed of computation, then I can know it's not just running in the subjective equivalent of hours.

A pragmatic test of intelligence would be the average number of patents awarded per individual, AI or human. More generally, you can offer problems where the outcome is specified, no human has a solution, but even if humans don't understand the solution, you can check whether it works.

Please give some thought to the additional steps your argument needs.
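A concrete stand-in for that asymmetry (my own example, not the Halford-style test discussed above): verifying a proposed factorisation of a large number is trivial, while finding the factors can be far beyond the ability of whoever wrote the checker. The specific numbers below are illustrative only.

# Sketch of the check-versus-solve asymmetry: the verifier needs none of
# the ability required to produce the answer it is checking.
def verify_factorisation(n, p, q):
    """Cheap test: confirm that p and q are a nontrivial factorisation of n."""
    return p > 1 and q > 1 and p * q == n

def find_factor(n):
    """Expensive search: trial division, hopeless for very large semiprimes."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None   # n is prime

if __name__ == "__main__":
    n = 104729 * 1299709            # product of two primes, chosen for the demo
    print(verify_factorisation(n, 104729, 1299709))   # instant: True
    print(find_factor(n))           # 104729, but only after ~100,000 divisions

For numbers hundreds of digits long, the one-line check still works while the search does not, which is the sense in which the checker can validate an answer it could never have produced.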
antialias_physorg
not rated yet Dec 04, 2014
One physical structure having a certain degree of functionality cannot by itself create a more complex physical structure with a more developed level of functionality.

I dunno. Evolution seems to have managed that for billions of years quite well, wouldn't you say? And that even WITHOUT having a plan or purpose. Just by random futzing about.

more complex creates more simple.

The brain cannot hold a representation as complex as itself. That much is true. But the brain is very good at abstracting. You don't need to know about every single atom of hydrogen and oxygen in a tank of water to be able to construct larger tanks of water. You just need to know about one of each.
viko_mx
1 / 5 (1) Dec 04, 2014
@antialias_physorg

You cannot use a hypothesis which is not proven, and never will be, as an argument based on facts. Evolution never happened either in cosmic space or on Earth. The living world was created at the beginning complete and fully functional, and it gradually degrades over time due to entropy. More complex creates more simple through conscious activity, and there are plenty of examples of this. The opposite is not true and there are no examples of it.
Noumenon
not rated yet Dec 04, 2014
You made the test and its answer, therefore you understand it, ....


No. I know the principle, I can develop an algorithm that checks whether the solution is correct, but I don't understand the solution myself, ...


It appears that the distinction here is in the difference between Deductive Reasoning, on which mathematics is based, and Inductive Synthesis, on which science is based.

The former, deductive reasoning, does not allow one to learn new things not already implicitly present in the starting axioms,.... while the latter, an inductive synthesis of experience, allows one to learn new things independent of any starting assumptions [although, I reject even this in qm context]

It is clear that deductive reasoning is algorithmic, at least given starting axiomatic definitions (which could be equated with the source code or data),... but is synthetic inductive reasoning algorithmic?
russell_russell
not rated yet Dec 04, 2014
If you adhere to the premise that intelligence has a physical basis, then any speculation over what consciousness is other than physical is eliminated.

Hold and make Noumenon adhere to his own premise.
Noumenon
not rated yet Dec 04, 2014
⇒ ......iow, 'you don't understand the solution yourself' only because you have not carried out the mechanical steps to derive it,... but you DID provide the principles, the starting axioms. Now Kurt Gödel's incompleteness theorem exposed the inherent limitations of all axiomatic (....algorithmic) systems,.. that there are always truths not provable in a given system, and so all such systems are either incomplete or inconsistent. To amend it, you have to then go outside that system, .... and again,... etc. Isn't this Eikka's point in essence?
Noumenon
not rated yet Dec 04, 2014
If you adhere to the premise that intelligence has a physical basis, then any speculation over what consciousness is other than physical is eliminated.


But I have not speculated on what consciousness is other than upon a physical basis,..... I'm only objecting to those who presume it has NO physical basis and thus justify denying that it even exists or is a guiding element in intelligence. They're the ones speculating on what it is NOT.
antialias_physorg
5 / 5 (1) Dec 04, 2014
Evolution never happened either in cosmic space or on Earth.

Bold statement. And pretty much at odds with every experiment conducted in that regard (and every piece of archaeological evidence ever found). Care to explain what you base this statement on?

The living world was created at the beginning complete and fully functional, and it gradually degrades over time due to entropy.

Ah. Common fallacy. Entropy does NOT mean uniform homogeneity. (And arguing that a god created stuff...really?...I mean...really? Are you sure you're not better off visiting some preschool or religious site?)
Sigh
not rated yet Dec 04, 2014
You made the test and its answer, therefore you understand it, ....

No. I know the principle, I can develop an algorithm that checks whether the solution is correct, but I don't understand the solution myself, ...

It appears that the distinction here is in the difference between Deductive Reasoning, on which mathematics is based, and Inductive Synthesis, on which science is based.

Don't think so. All Euclidean geometry is deduction from its five axioms, but understanding the axioms doesn't guarantee understanding anything that is derived from them.
Noumenon
not rated yet Dec 04, 2014
You made the test and its answer, therefore you understand it, ....

No. I know the principle, I can develop an algorithm that checks whether the solution is correct, but I don't understand the solution myself, ...

It appears that the distinction here is in the difference between Deductive Reasoning, on which mathematics is based, and Inductive Synthesis, on which science is based.

Don't think so. All Euclidean geometry is deduction from its five axioms, but understanding the axioms doesn't guarantee understanding anything that is derived from them.


As I said, only because you as a human have not bothered to carry out the Deduction. The point was that nothing new can be learned not already implicit in your starting premises,... not that no deduction is required given that axiomatic system. The question was one of 'going beyond' a given intelligence, correct?
TheGhostofOtto1923
1 / 5 (1) Dec 04, 2014
you could summarize?
I did. 'Consciousness is an illusion' -says a preeminent authority in the very field you insist is relevant - sciphilosophy.
Perhaps you could refrain from being insulting awhile at the same time expecting me to click on your links
Well I was very pleasant and congenial the first few times I posted it, wasn't I? I find the pretentiousness w.r.t. philobabble very annoying but have shown considerable restraint in the past.
It is indeed perplexing to me how one could deny the existence of the only thing we can be certain exists, our own self-awareness
Well of course you do - you haven't bothered to find out WHY by at least watching Dennett's TED presentation, which itself is a summary.

This says reams about your sincerity w.r.t. the fields of science-related philosophy - you don't even bother to find out what real philos are doing in it. You only pick and choose those which fit your preconceived (a priori) notions of what is true.
cont>
krundoloss
not rated yet Dec 04, 2014
Many of your statements are philosophical. "A mind cannot create a greater mind" etc. How then might one explain how the average intelligence of humans keeps increasing as time marches on?

Learning machines can take a logical flow to create an "understanding" of the world and various subjects. It will not take 100 years because several methods will allow us to converge on a solution much sooner than that. Some say it will take 100 years, and we talk about people sitting at a computer, writing code manually. That method could be replaced with a much faster and more direct method. With a direct computer-to-brain interface, how much faster could programmers work?

And as others have said, it is doubtful that an AI would want to destroy or replace mankind. They aren't competing for the same resources (presumably), and therefore have no reason to attempt to destroy us. We are far more likely to try to destroy them....
Noumenon
not rated yet Dec 04, 2014
but understanding the axioms doesn't guarantee understanding anything that is derived from them.


It does in fact guarantee that because that's what deductive reasoning means!
Modernmystic
not rated yet Dec 04, 2014
Isn't this Eikka's point in essence?


I actually thought about the incompleteness theorem earlier in the discussion, however I don't think it applies here. It simply means that knowledge will always be incomplete, it doesn't mean that we can't go forward with using an incomplete theory to construct a perfectly functional technology. In fact this has been the rule rather than the exception in human history.
TheGhostofOtto1923
1 / 5 (1) Dec 04, 2014
Wow a post disappeared during editing.
Modernmystic
not rated yet Dec 04, 2014
I'd still like someone from the "other side" to explain why it is we can recognize greater intelligence in other human beings than ourselves if it's supposedly impossible to devise a test for it; or how it's impossible that we might GE dogs, chimps, or even humans to be more intelligent than they currently are, or even more intelligent than the designers themselves. It seems pretty uncontroversial that we can manipulate genes, and since genes (at least in part) determine intelligence, this is entirely feasible.
Noumenon
not rated yet Dec 04, 2014
Isn't this Eikka's point in essence?


I actually thought about the incompleteness theorem earlier in the discussion, however I don't think it applies here. It simply means that knowledge will always be incomplete, it doesn't mean that we can't go forward with using an incomplete theory to construct a perfectly functional technology. In fact this has been the rule rather than the exception in human history.


I only mentioned it in the context of the specific discussion between Eikka and Sigh. Is it not relevant in their respective quotes? Eikka seems to be saying that an algorithmic system (A.I.) can not go outside itself. Is this not what the incompleteness theorem is after all?

you could summarize?

I did. 'Consciousness is an illusion' -says a preeminent authority in the very field you insist is relevant - sciphilosophy.


I meant summarize why he thinks that. Any problem can be solved by denying it exists.
TheGhostofOtto1923
1 / 5 (1) Dec 04, 2014
I meant summarize why he thinks that. Any problem can be solved by denying it exists.
HE summarizes it in his TED talk - why would you depend on my paraphrasing what an expert IN YOUR OWN FIELD has to say?? Do you have such a selective disregard for evidence?
I meant summarize why he thinks that. Any problem can be solved by denying it exists
-And you think that a person like Dennett is just dismissing consciousness, without a convincing argument?

You are certainly denying Dennett's arguments by not listening to them.
krundoloss
not rated yet Dec 04, 2014
I have read through these comments, and I cannot understand what everyone is debating. Despite what anyone says, or their reasons for it, Humans can, given enough time, create something that resembles artificial intelligence, at the very least something that is indistinguishable from real intelligence. Getting into the philosophy of "what is intelligence", etc, is beside the point. If the thing can "think" and "draw conclusions" and "be creative", then it is intelligent.
TheGhostofOtto1923
1 / 5 (1) Dec 04, 2014
AI would need to have a purpose. And there is no greater purpose than self-preservation.

With humans and all other animals it is survival to reproduce. With AI it need only be survival. With computer systems we are already building in protection software. Other systems are tasked with identifying threats and protecting other systems. Further, software is being used to identify and attack the attackers at the source.

The integration of these separate systems, and the inclusion of the ability to autonomously upgrade identification, protection, and attack, will lead to the emergence of AI.

In this regard it will operate no differently than any other lifeform; protecting itself against adversaries. But it will have no need to reproduce, only survive.
TheGhostofOtto1923
1 / 5 (1) Dec 04, 2014
Humans can, given enough time, create something that resembles artificial intelligence, at the very least something that is indistinguishable from real intelligence
We already have programs which do a fair job of mimicking human 'intelligence'. But why bother? Our 'intelligence' is based on survival to reproduce. AI need not resemble the way humans think in order to be functional. It will surpass us.
Noumenon
not rated yet Dec 04, 2014
Perhaps you could summarize it?

I did. 'Consciousness is an illusion' -says a preeminent authority in the very field you insist is relevant - sciphilosophy.


I meant summarize why he thinks that.

HE summarizes it in his TED talk - why would you depend on my paraphrasing what an expert IN YOUR OWN FIELD has to say?


So the answer is no? As I said, I'm not going to engage in a debate with the internet, nor am I going to do all the work for you. Why does he draw that conclusion?
TheGhostofOtto1923
3 / 5 (2) Dec 04, 2014
So the answer is no? As I said, I'm not going to engage in a debate with the internet, nor am I going to do all the work for you. Why does he draw that conclusion?
So you see what you're doing... you're rejecting evidence because you don't like the FORM in which it is presented to you. You don't want to watch a video by an expert, you want it spoon-fed to you by someone somewhat less informed than the EXPERT IN YOUR OWN FIELD who made it.

People here present evidence in the form of links all the time. You're being immature by refusing to review it. Are you lazy? Are you deaf and can't hear videos perhaps?

Heres the vid
http://www.ted.co...guage=en
As I said, I'm not going to engage in a debate with the internet
... 'a debate with the internet...' -So when YOU post links to d'Espagnat and your other sources we can refuse to look at them and instead demand paraphrasing, I guess.
Modernmystic
not rated yet Dec 04, 2014
I only mentioned it in the context of the specific discussion between Eikka and Sigh. Is it not relevant in their respective quotes? Eikka seems to be saying that an algorithmic system (A.I.) can not go outside itself. Is this not what the incompleteness theorem is after all?


I agree depending on how "go outside itself" is defined.

Human beings have improved and continue to improve their knowledge despite having no higher "authority" to appeal to in order to ensure they "know" they've improved this design or that design. Reality settles those questions without esoteric tests to prove or disprove them. I may be missing something, and if I am I apologize for my thick head, but I really don't see how any of this is controversial or difficult.

I think he's talking more about a theory of knowledge rather than making a specific technology and refining it. The two are related, but not equivalent even when speaking of intelligence itself.
krundoloss
not rated yet Dec 04, 2014
AI would need to have a purpose. And there is no greater purpose than self-preservation.

We already have programs which do a fair job of mimicking human 'intelligence'. But why bother? Our 'intelligence' is based on survival to reproduce. AI need not resemble the way humans think in order to be functional. It will surpass us.


A few points:
-Any conscious being struggles with purpose. We say self-preservation is our purpose, but really that is just an effect of our biological form. If you could not be killed, what then, would be your purpose?
-Why Bother? Yes, indeed, why bother doing anything. Because we can! Also, as our foundation of knowledge grows, we could put an artificial intelligence to use, bounce ideas off of it, use it as an analytical tool, much like we use computers now, but it would be easier and simpler for us to interact with an AI.
-It will surpass us, no doubt. But then again, it could teach us, to a point. The movie "Her" seems about right...
TheGhostofOtto1923
1 / 5 (1) Dec 04, 2014
-And here's a transcript in case you're handicapped in some way
http://www.ted.co...guage=en

-Perhaps you are just not prepared to have your 19th century notions of consciousness threatened by 20th century science. I understand.
TheGhostofOtto1923
1 / 5 (1) Dec 04, 2014
-Any conscious being struggles with purpose. We say self-preservation is our purpose, but really that is just an effect of our biological form. If you could not be killed, what then, would be your purpose?
But AI could be killed by competing AI. Hackers are corrupting and destroying software and data at unprecedented rates. Self-programming software which can counter such threats will become the only way of responding quickly and effectively enough. This escalation will lead to AI.

Remember the flash crash? No human can act quickly enough to prevent such things. We are giving AI the ability and the authority to do it because it is vital to OUR survival.
But then again, it could teach us, to a point. The movie "Her" seems about right...
-Or like Data from star trek?
https://www.youtu...UFan6iwg

-Data was on a quest to become more human. But in reality that would only be for our benefit, not his.
Noumenon
not rated yet Dec 04, 2014
So when YOU post links to d'Espagnat and your other sources we can refuse to look at them and instead demand paraphrasing, I guess.


My ref links are embedded within my comment, and are mainly for further info,... not to make my point for me. I already indicated above that I may watch it, I just don't have time at present, and expected you to make your point, if you have one, within this thread. This is a comment section, not a bibliography. Also, I want to make sure YOU understand your own reference's point and its relevance to a point I've made that you object to.
Sigh
not rated yet Dec 04, 2014
but understanding the axioms doesn't guarantee understanding anything that is derived from them.

It does in fact guarantee that because that's what deductive reasoning means!

So my teachers shouldn't have bothered teaching me geometry, they should have just given me the axioms, and that would have guaranteed that I can prove anything that can be proved in Euclidean geometry? There is a big difference between a conclusion logically following from some premises and someone's ability to understand the line of reasoning leading to that conclusion. If deduction guaranteed understanding, nobody would commit logical fallacies. You are empirically wrong.
Sigh
not rated yet Dec 04, 2014
⇒ ......iow, 'you don't understand the solution yourself' only because you have not carried out the mechanical steps to derive it,... but you DID provide the principles, the starting axioms. Now Kurt Gödel's incompleteness theorem exposed the inherent limitations of all axiomatic (....algorithmic) systems,.. that there are always truths not provable in a given system, and so all such systems are either incomplete or inconsistent. To amend it, you have to then go outside that system, .... and again,... etc. Isn't this Eikka's point in essence?

No, because of the difference between conclusions that can be derived in principle, and the conclusions that any one cognitive system can derive. To use a simple example, if a deduction needs more memory than is available to that system, then the system's ability to apply the rules doesn't help. That sort of limitation is likely to hit before incompleteness ever becomes relevant.
Noumenon
not rated yet Dec 04, 2014
There is a big difference between a conclusion logically following from some premises and someone's ability to understand the line of reasoning leading to that conclusion. If deduction guaranteed understanding, nobody would commit logical fallacies.


We are talking about A.I. correct? If so, none of that would be relevant, as machines are not intellectually lazy, nor make logical mistakes. In any case I think you're missing the original point,... in deductive reasoning all the information is already implicit in the axioms, not outside of that system.

No, because of the difference between conclusions that can be derived in principle, and the conclusions that any one cognitive system can derive. To use a simple example, if a deduction needs more memory than is available to that system, then the system's ability to apply the rules doesn't help.


But that is a practical limitation, and it seems to support Eikka?
Camphibian
not rated yet Dec 04, 2014
What is AI anyway? Are we talking about consciousness or a smart piece of hardware that can optimise battery charging or find cats on the internet? If we are talking about consciousness, then clearly it is possible since we exist and I'm supposing that many of you if not all are conscious, and possibly are even intelligent. The existence of an artificial machine that exhibits similar characteristics is therefore not ruled out by the nature of the universe.
Evolutionary game theory would suggest that altruism and cooperation are the winning strategies. I argue that any consciousness whether artificial or human made will adopt those strategies. It's only a matter of time, Mr. Anderson.
Noumenon
not rated yet Dec 04, 2014
Evolutionary game theory would suggest that altruism and cooperation are the winning strategies.


LOL. Well that's the political leftist propagandized version, spoon fed to liberals in the making. They don't want you to know the truth,... that the core operative mechanisms in evolution are Egoism and Competition, survival of the fittest,.... because these are also key operative elements in free market capitalism and liberty.
Sigh
5 / 5 (1) Dec 04, 2014
We are talking about A.I. correct? If so, none of that would be relevant, as machines are not intellectually lazy, nor make logical mistakes

The dumb machine I am typing on isn't, but the best model we have for intelligence at the moment is human, and AI modelled on that would have similar traits. Even without laziness, there would be cognitive limitations because any physically instantiated intelligence has them. There would be approximations when the exact solution takes too much time.

In any case I think you're missing the original point,... in deductive reasoning All the information is already implicit in the axioms, not out side of that system.

But to use it, you must make it explicit, and there is no guarantee you can, quite separately from incompleteness.

But that is a practical limitation, and it seems to support Eikka?

No, because if you have X amount of memory and you know how it works, you don't need 2X amount of memory to give 2X to an AI.
Noumenon
not rated yet Dec 04, 2014
We are talking about A.I. correct? If so, none of that would be relevant, as machines are not intellectually lazy, nor make logical mistakes


The dumb machine I am typing on isn't, but the best model we have for intelligence at the moment is human, and AI modelled on that would have similar traits. Even without laziness, there would be cognitive limitations because any physically instantiated intelligence has them. There would be approximations when the exact solution takes too much time.


Excellent point. Does that then imply that 'strong A.I.',.... may not be algorithmically based?
Sigh
not rated yet Dec 04, 2014
Evolutionary game theory would suggest that altruism and cooperation are the winning strategies.

They don't want you to know the truth,... that the core operative mechanisms in evolution are Egoism and Competition, survival of the fittest,.... because these are also key operative elements in free market capitalism and liberty.

Both too simple. Winning strategies depend on payoffs, and there is often frequency-dependent selection, but egotism and competition are assumed in the analyses that say cooperation is an important part of the mix. Read Robert H. Frank's book "Passions within Reason" and a few papers on behavioural game theory. There is no need for "they don't want you to know the truth" conspiracy theories. The currently best known approximation to truth is actually out there, and not being hidden.

The one thing that is demonstrably a myth is homo economicus, except, within limits, among two groups, one being economists.
Sigh
not rated yet Dec 04, 2014
Excellent point. Does that then imply that 'strong A.I.',.... may not be algorithmically based?

I am not sure what the alternative would be. Anyway, it is possible to write algorithms for approximations (many numerical methods in mathematics are approximations to whatever degree of accuracy you need) and heuristics. I don't see how the mere fact that shortcuts can pay off in a physically instantiated system with real world limitations would fundamentally change the architecture.
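As a small, made-up example of what an "approximation to whatever degree of accuracy you need" looks like as an algorithm (my own illustration, not anything from the thread): bisection can approximate a square root to any requested tolerance without ever producing the exact value.

# Minimal sketch of an approximation algorithm: bisection for square roots.
# The tolerance argument is the "degree of accuracy you need".
def approx_sqrt(x, tolerance=1e-10):
    """Approximate the square root of a non-negative x by interval halving."""
    low, high = 0.0, max(1.0, x)
    while high - low > tolerance:
        mid = (low + high) / 2
        if mid * mid < x:
            low = mid      # the root lies in the upper half of the interval
        else:
            high = mid     # the root lies in the lower half of the interval
    return (low + high) / 2

if __name__ == "__main__":
    print(approx_sqrt(2))          # ~1.41421356...
    print(approx_sqrt(2, 1e-3))    # coarser but cheaper

Tightening the tolerance buys more accuracy at the cost of more iterations, which is the trade-off heuristics and numerical methods make all the time.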
Noumenon
not rated yet Dec 04, 2014
Winning strategies depend on payoffs, and there is often frequency-dependent selection, but egotism and competition are assumed in the analyses that say cooperation is an important part of the mix.


Cooperation and altruism are already subsumed under egoism and competition, not the other way around. Man cooperates and is altruistic to the extent that he benefits from it somehow.

Liberals tend to equate egoism with 'greed' or they do so to obfuscate the fact that egoism played out in an arena of freedom and liberty, is a powerful force and is why capitalism and western society is so resoundingly successful. IOW, If one even bothers mentioning altruism and cooperation within that context, they are exposing their political bias.
Sigh
not rated yet Dec 04, 2014
Attempt to edit created accidental duplicate, can't delete.
Noumenon
not rated yet Dec 04, 2014
Excellent point. Does that then imply that 'strong A.I.',.... may not be algorithmically based?

I am not sure what the alternative would be.


Exactly, .....without understanding what consciousness is and how it comes about given the physical brain, one can only invoke a faith that intelligence can be algorithmically reproduced (strong AI), when in fact there is no scientific basis for that faith.
Sigh
not rated yet Dec 04, 2014
I don't see how the mere fact that shortcuts can pay off in a physically instantiated system with real world limitations would fundamentally change the architecture.

I have to rephrase that. Of course limitations have an impact, but I don't see why that would come to the point where you would have to abandon algorithms, and I would have to ask for what?
Exactly, .....without understanding what consciousness is
I was only discussing intelligence. Whether consciousness is needed for intelligence is a different question.

and how it comes about given the physical brain, one can only invoke a faith that intelligence can be algorithmically reproduced (strong AI), when in fact there is no scientific basis for that faith.

Do you have reason to think it could not work, and do you have an alternative?
Noumenon
not rated yet Dec 04, 2014
Exactly, .....without understanding what consciousness is and how it comes about given the physical brain, one can only invoke a faith that intelligence can be algorithmically reproduced (strong AI), when in fact there is no scientific basis for that faith.

I was only discussing intelligence. Whether consciousness is needed for intelligence is a different question.

There is no intelligence without consciousness. You're making the unfounded presumption that consciousness is not a key element in intelligence. Why did we evolve to be self-aware if it is not a key mechanism?

[my posts are in the context of strong A.I.,... that an actual intelligence could be created by man through algorithmic means]
Sigh
5 / 5 (1) Dec 04, 2014
There is no intelligence without consciousness.

Do you rely on one data point there, humans? Even then, can you separate the contributions of conscious and non-conscious mechanisms to intelligence? Can you show that consciousness is needed in any conceivable cognitive architecture?

You're making the unfounded presumption that consciousness is not a key element in intelligence.

Not at all, I merely don't assume it's necessary until I have a reason. I do have a paper on the topic, but haven't yet had time to read it.

Why did we evolve to be self-aware if it is not a key mechanism?

How do you know we did? Damasio claims consciousness is an accidental side effect. I never understood that part of his argument about consciousness, so I can't comment, but he seems to have good support for the rest.
Noumenon
not rated yet Dec 04, 2014
without understanding what consciousness is and how it comes about given the physical brain, one can only invoke a faith that intelligence can be algorithmically reproduced (strong AI), when in fact there is no scientific basis for that faith.

Do you have reason to think it could not work, and do you have an alternative?

As I said, without an understanding of the role of consciousness in intelligence or how it comes about,... what reason would I have to think that it may work,... by sheer luck or coincidence?
hllclmbr
5 / 5 (1) Dec 04, 2014
Obama has forced Russia to deploy exactly one warhead more than the USA does, for reasons only Facebook users seem to understand.


What is this drivel?
TheGhostofOtto1923
1 / 5 (1) Dec 04, 2014
My ref links are embedded within my comment... not to make my point for me... I just don't have time at present
What - 15 minutes??
and expected you to make your point, if you have one, within this thread
I DID. Consciousness is an illusion. Refer to experts for more info. I don't have to know all that they know, or pretend to. I respect their opinions and understand that they put a LOT more work into generating them than you or I ever will.
This is a comment section, not a bibliography. Also, I want to make sure YOU understand your own reference's point
Well how could you do that if you don't visit the source where it comes from?

The bigger question here is why aren't you familiar with the work of what may be the preeminent authority on the subject? You are talking about something you have obviously not kept up with.

Like I say, your opinions w.r.t. consciousness are obviously outdated relative to serious researchers within your own discipline/hobby.
TheGhostofOtto1923
1 / 5 (1) Dec 04, 2014
no intelligence without consciousness
-But you have failed to define either of these terms, and so you can claim most anything at all about them, can't you? Please provide references to concise definitions by experts which enable you to claim such a relationship.

This might take you more than 15 minutes.
TheGhostofOtto1923
1 / 5 (1) Dec 04, 2014
Here's the opinion of a genuine philo on the subject.

2011-06-30 Tristan Cunha Tufts university
"Is intelligence a prerequisite for consciousness, or vice versa?

"Unfortunately, we don't have agreed upon and tested definitions for both intelligence and consciousness, and can't even definitively state whether any arbitrarily chosen subject falls in to either category. We only have one set of subjects (humans) that we can say with surety are both conscious and intelligent, and that limits our ability to answer this question. But given these limitations, can we imagine a person who's intelligent, but not conscious? What about conscious, but not intelligent?"
http://philpapers...?tId=672
Scroofinator
not rated yet Dec 04, 2014
no intelligence without consciousness
-But you have failed to define either of these terms

IMHO, the distinction is quite simple:

Intelligence is the ability to reason

Consciousness is the ability to contemplate existence
viko_mx
not rated yet Dec 05, 2014
@antialias_physorg

"Bold statement. And pretty much at odds with every experiment conducted in that regard (and every ). Care to explain what you base this statement on? "

"To see reality as it is not a question of boldness but of adequacy. There are no archaeological evidence which can be interpreted in a strongly singular way. Dating methods are too unreliable. You believe too much of main stream science and it is your fault.

"Ah. Common fallacy. Entropy does NOT mean uniform homogeneity. (And arguing that a god created stuff...really?...I mean...really? Are you sure you're not better off visiting some preschool or religious site?)"

Of course entropy does not mean homogeneity in the system. However, a decrease in entropy, or a local increase in order, is the result of a preliminary idea and intelligent action, not of random events.
viko_mx
not rated yet Dec 05, 2014
@antialias_physorg

To increase the order in certain areas of a system an intelligent approach is needed, along with preliminary design and specific technology. Due to the laws of physics, elementary particles can build crystal lattices or simple inorganic or organic molecules without intelligent intervention, but only that much. For complex functional structures, the introduction of information and controlled energy in a certain way (a technology) is necessary to achieve the preliminary idea. Do you think it might be a better idea not to comment on the educational qualifications of strangers?
antialias_physorg
not rated yet Dec 05, 2014
To increase the order in certain areas of a system an intelligent approach is needed,

No. Order can grow locally(!) without an intelligent approach (the planet you're standing on is a perfect example of this. As is the solar system it's in. As is the galaxy it's part of...).
You have not understood the laws of physics.

The point is that there are several forces at work. If only one force were at work then no local decrease in entropy would be possible. However, we do not live in a one-force universe (lucky for us, because we could not exist in such a universe).
viko_mx
not rated yet Dec 05, 2014
@antialias_physorg

Order can grow locally thanks to physical laws and random events to a very low level of complexity and functionality. For the emergence of highly organized functional structures of matter, a preliminary idea and intelligent intervention are required. To overcome the increase in entropy in certain zones of our universe over time, it is necessary to carry out work in a certain direction, based on a plan considered in advance, and by introducing the necessary controlled energy and matter.
What does your example of the Earth prove? It is a designed, not a naturally occurring, celestial body. You cannot give an example of a physical structure that actually formed randomly due to the physical laws alone. Bodies in the solar system are too heterogeneous in their internal structure and chemical composition to have arisen from an imaginary homogeneous proto-cloud, as stipulated in the official theory, thanks only to the laws of physics.

viko_mx
not rated yet Dec 05, 2014
@antialias_physorg

What does it matter whether one or more forces operate in a system, as regards its ability to maintain or create more order from less? The world has shown a constant tendency for entropy to increase over time, despite many different physical laws and force interactions. Thus was the world created, and people constantly and consciously counteract this trend.
You understand that you cannot say that a man unknown to you does not know the laws of physics just because he does not agree with your point of view. This is not a mature and adequate attitude. Which physical laws do I not understand? Which ones allow the evolution of systems?
Noumenon
not rated yet Dec 05, 2014
There is no intelligence without consciousness.


Do you rely on one data point there, humans? Even then, can you separate the contributions of conscious and non-conscious mechanisms to intelligence? Can you show that consciousness is needed in any conceivable cognitive architecture?

Every intellectual achievement ever accomplished by humans was done while we were awake,... and close enough to none were accomplished while we were asleep. Statistically and sarcastically speaking, this can not be regarded as just a coincidence.
Noumenon
not rated yet Dec 05, 2014
You're making the unfounded presumption that consciousness is not a key element in intelligence.


Not at all, I merely don't assume it's necessary until I have a reason.

Which is equivalent to what I said. You have a reason, ....consciousness is a phenomenon of the brain. Intelligence is a phenomenon of the brain. Science does not arbitrarily ignore elements of a system that might be interrelated until it can be shown that they aren't. It is the whole point of science!

In order to justify setting the phenomenon of consciousness aside as passively independent or merely emergent, one first has to understand it well enough to know this.

This is why the Turing Test mentioned above can NEVER be a quantitative verification of strong-AI. At best only that it was able to fool someone. Having Positive knowledge that a true intelligence was created artificially requires understanding how the brain actually achieves it. The strong-AI enthusiasts think it will just magically emerge.
Noumenon
not rated yet Dec 05, 2014
[i] expected you to make your point, if you have one, within this thread

I DID. Consciousness is an illusion. Refer to experts for more info.


That is not an explanation, it is just a claim. I could also post a link and say so and so says such and such to the contrary. Having engaged in such a pointless link war we're not any closer to a substantive discussion ourselves.

Also, I want to make sure YOU understand your own reference's point

Well how could you do that if you don't visit the source where it comes from?

Even if I did I still would not know that, because you have not explained why "Consciousness is an illusion". This is what I mean by 'I'm not interested in debating the Internet',.... I'm not going to be sent off to research your own point for you and then also have to counter it. I'm not doing all the work for you.

I may have already watched it, or not and will. I may have read Dennett's books, or may not have.
antialias_physorg
not rated yet Dec 05, 2014
Order can grow locally thanks to physical laws and random events to a very low level of complexity and functionality.

Wishy-washy conjecture. Name a hard barrier for what is below that level and what isn't. Name the physical law (with math!) that made you choose that barrier. Please. I'd really like to see you try.
You cannot give an example of a physical structure that actually formed randomly due to the physical laws alone.

Oh boy. You might want to try looking at the night sky once in a while. Billions upon billions of examples to prove you wrong there, every night, for you to see.
antialias_physorg
not rated yet Dec 05, 2014
What does it matter whether one or more forces operate in a system, as regards its ability to maintain or create more order from less?

Simple: With just one force you cannot get local minima. And local complexity (or just any inhomogeneity at all) requires local minima.

The world has shown a constant tendency for entropy to increase over time, despite many different physical laws and force interactions.

Yes it has. Globally. You still do not fully grasp the difference between a global trend and a local situation. (Lemme guess - you also don't understand how global warming can lead to locally colder weather in some regions, right? You believe that global warming must mean every place warms equally, right?)
Noumenon
5 / 5 (1) Dec 05, 2014
I DID. Consciousness is an illusion. Refer to experts for more info. I don't have to know all that they know, or pretend to. I respect their opinions and understand that they put a LOT more work into generating them than you or I ever will.

So, all of a sudden you respect philosophers' opinions? Dennett's position on consciousness is not universally accepted by those experts. Dennett has merely found a way of avoiding any explanation,... which is a convenient distraction for use by strong-AI,... and is probably why he was invited to speak at TED.

The bigger question here is why aren't you familiar with the work of what may be the preeminent authority on the subject? You are talking about something you have obviously not kept up with.


How would you know,... you refuse to engage me in a discussion.
TheGhostofOtto1923
1 / 5 (1) Dec 05, 2014
That is not an explanation, it is just a claim
That's right. Dennett's video is the explanation, and I won't presume to translate it for you.
I could also post a link and say so and so says such and such to the contrary
-And doing so would be much more honest and informative than ad libbing and paraphrasing. I can understand your reticence - your sources are easy to discount.
Having engaged in such a pointless link war we're not any closer to a substantive discussion ourselves
Calling it by a derisive name doesn't diminish the usefulness of references and quotes. Attempting to speak for experts (and getting it wrong) seems egoistic.
How would you know,...
-Because you SAY so:
It is indeed perplexing to me how one could deny the existence of the only thing we can be certain exists, our own self-awareness
-You have never read any of Dennett's stuff because you are obviously a casual hobbyist who wants to know only what amuses him.
TheGhostofOtto1923
1 / 5 (1) Dec 05, 2014
So, all of a sudden you respect philosophers' opinions?
He is more scientist than philo.

"Daniel Clement Dennett III (born March 28, 1942) is an American philosopher, writer, and cognitive scientist... He is the recipient of a Fulbright Fellowship, two Guggenheim Fellowships, and a Fellowship at the Center for Advanced Study in the Behavioral Sciences."
Dennett's position on consciousness is not universally accepted by those experts
-Exactly what experts are you referring to? We know you philos can never agree on anything. You yourself can't even come up with a working definition of either consciousness or intelligence, and neither can they.
Dennett has merely found a way of avoiding any explanation,...
That's not true. You haven't read anything he wrote. How would you know? You haven't even taken the time to cite experts with this opinion who HAVE reviewed his work, as I do with the stuff you post.
katesisco
not rated yet Dec 07, 2014

from comments above:
It is axiomatic that intelligence can reproduce anything that is produced naturally. The only argument against this is that the laws of physics were different when said phenomena was produced.


Actually, isn't it true that on this site, phys.org, everyday announcements claim that science now understands long-standing mysteries that baffled researchers for half a century or more, and that in spite of the Higgs announcement we do not understand the connection between gravity and magnetism? So, no, we cannot reproduce natural effects in toto.
And I believe that our magnetar sun has been regularly reducing energy as each magnetic reversal sheds magnetism. Science now knows life began not once but twice -comb jellies and not sponges--and that may mean physics as we know it constantly changes.
Do we live on a planet that has already produced an elevated life form--intelligent viruses or bacteria?
Sigh
not rated yet Dec 07, 2014
Every intellectual achievement ever accomplished by humans was done while we were awake,... and close enough to none were accomplished while we were asleep. Statistically and sarcastically speaking, this can not be regarded as just a coincidence.

First, awake and conscious are not the same thing. Read Damasio on absence seizures. Second, the correlation can be explained by there being no relevant output while people are asleep. These are two reasons why a correlation between being awake and producing intelligent behaviour does not indicate a causal connection between consciousness and intelligence. Third, your lack of reply to my question about other species' consciousness and intelligence suggests you are relying on one data point: humans. Even if we stipulated that you are right, you might only be right for humans. You really can't generalise from that to all conceivable cognitive architectures.
Sigh
not rated yet Dec 07, 2014
You're making the unfounded presumption that consciousness is not a key element in intelligence.

Not at all, I merely don't assume it's necessary until I have a reason.

Which is equivalent to what I said.

No. I say I don't know whether consciousness and intelligence are connected, and I haven't yet seen a persuasive argument. You say they definitely are connected. Not at all equivalent.

consciousness is a phenomenon of the brain. Intelligence is a phenomenon of the brain.
So is heat, and you don't argue that heat generates consciousness or intelligence.

We got to this from algorithms. Do you assume algorithms can't give rise to consciousness?

Noumenon
not rated yet Dec 07, 2014
First, awake and conscious are not the same thing. [..i.e.] absence seizures

For the context of my point, awake, awareness, and consciousness are to be equated. However, if it suits you better.... "Every intellectual achievement ever accomplished by humans was done while we were [not having a seizure],... and close enough to none were accomplished while we were [having a seizure]."

Second, the correlation can be explained by there being no relevant output while people are asleep. These are two reasons why a correlation between being awake and producing intelligent behaviour does not indicate a causal connection between consciousness and intelligence.

The lack of output raises the question of why. The lack of a sense of time passing, the inability to produce coherent output, the incoherence and seemingly random memory accessing in dreaming,... all are evidence of an unconscious state being responsible for the lack of intelligent achievement.
Sigh
not rated yet Dec 07, 2014
Science now knows life began not once but twice - comb jellies and not sponges

Do you mean two different origins of life, or two different origins of multicellularity?
Noumenon
not rated yet Dec 07, 2014
Do you rely on one data point there, humans?

Yes, I have been.
Even then, can you separate the contributions of conscious and non-conscious mechanisms to intelligence?

I attempted to in the other thread,.... "A conscious human learning an activity for the first time (driving, walking, speaking, reading) is awkward in doing it. Upon having done it many times, the activity becomes 'burned in' so that he can perform it 'autonomously' or subconsciously. The former requires conscious intelligence, while the latter, already presumed complete, requires only unthinking computability and carrying out instructions. How to program a programmer?"

Can you show that consciousness is needed in any conceivable cognitive architecture?


It is the obligation of strong-AI to show that it isn't needed if they are making that claim. Recall that I am only claiming that consciousness is a phenomenon the denial of which, or the lack of understanding of which, renders strong-AI unjustified.
Noumenon
not rated yet Dec 07, 2014
You're making the unfounded presumption that consciousness is not a key element in intelligence.


Not at all, I merely don't assume it's necessary until I have a reason.


Which is equivalent to what I said.


No. I say I don't know whether consciousness and intelligence are connected, and I haven't yet seen a persuasive argument. You say they definitely are connected.


The point is that strong-AI actively operates on the premise that consciousness is NOT required for intelligence.

Science does not presume there is no connection a priori,... in fact the point of science is to understand all phenomena of a given system,... which is to say, to understand the interrelationships or to demonstrate the lack thereof if that is to be claimed. Strong-AI is not waiting for science to do this first. In fact they're anti-science in the sense of denying an obvious phenomenon on the basis of expediency and not positive knowledge.
Noumenon
not rated yet Dec 07, 2014
consciousness is a phenomenon of the brain. Intelligence is a phenomenon of the brain.
So is heat, and you don't argue that heat generates consciousness or intelligence.

[...] Do you assume algorithms can't give rise to consciousness?


I presume only that the lack of scientific understanding of consciousness's role in intelligence prevents strong-AI from assuming algorithms CAN give rise to consciousness.

It's like claiming that the alchemists should have been able to make gold because there was no scientific understanding that told them they couldn't.
Noumenon
not rated yet Dec 07, 2014
Strong-AI's success does not hinge on my assumptions, but only on their own in the context of their claims.

Given the example of the human brain, there is no question that conscious intelligence is possible on a physical basis. The question is one only of the appropriate form and completeness of knowledge of the example to be replicated.

Noumenon
not rated yet Dec 08, 2014
There is no intelligence without consciousness. - Noumenon


Do you rely on one data point there, humans? - Sigh


Yes, I have been. - Noumenon


I should clarify my response here, as GhostofOtto is under the impression that I think only humans can be conscious.

It would be quite absurd if the human brain evolved in an essentially different way than any other complex animal's brain, as nature is efficient. So, yes, I suspect that animals have the element of consciousness as a key component of their intelligence.

My response "I have been", past tense, meant only that I have not had any specific examples of animal intelligence in mind.
Huns
not rated yet Dec 08, 2014
People who don't understand how consciousness and intelligence work, who have NEVER done the slightest amount of work with AI, are commenting on whether or not AI can *ever, ever* do the very thing they don't understand. LOL.

You think consciousness is a magical property of human brains that machines can never approach because you don't know what it is. You probably have some extremely vague notions, like "being awake" or "being self-aware." But anything concrete, anything to indicate that you understand the systems behind these phenomena? You have nothing to show. You are merely filling in the blanks with made-up garbage, like Dr. Phil.

You should really ask yourself, "Do I have any idea what I'm talking about? Can I prove it to myself dispassionately?" The answer is obviously no. Your ignorance is so obvious to me. You don't know how it works in your brain, yet you want to tell us whether it can work in a machine? Just log off.
Noumenon
not rated yet Dec 09, 2014
If you're referring to me,... I have NEVER stated it couldn't work in a machine. I have stated explicitly that consciousness must have a purely physical basis.

I have only questioned several assumptions made by strong-AI that are in fact scientifically unfounded.

If you are referring to me,... I suggest you learn some reading skills.

You think consciousness is a magical property of human brains that machines can never approach because you don't know what it is

It is strong-AI that needs to understand it if they are the ones claiming to recreate an intelligence. Rather than understanding it, they make the unfounded and arbitrary presumption that it is not a key element in intelligence.

I'm saying machines can never approach (true intelligence) because STRONG-AI doesn't know what it is and actively denies the phenomenon.
russell_russell
not rated yet Dec 10, 2014
Life evolves. Humans evolve.
The hallmarks of evolution for all life are not the hallmarks of evolution for any form of A. I.

The evolution of life led to consciousness.
Any aspect of life's evolution will reveal a physical basis.

If you accept that the fundamental driver of evolution is mutation, then you accept that A.I. does not have life's fundamental driver of evolution.

The only mutations A.I. will ever experience are man-made or induced 'mutations' [changes].

If the goal of A.I. is *human-like* intelligence, then no alternative other than consciousness will fulfill this goal.

If you know exactly how and where human memory and learning occurs, then you have a model for how consciousness and intelligence occurs. All four will have a "purely physical basis".

Artificial Intelligence will remain artificial unless you fast-forward and compress an evolution [as information] that culminates in life's consciousness and intelligence, billions of years in the making.
Huns
not rated yet Dec 11, 2014
If you're referring to me,... I have NEVER stated it couldn't work in a machine. I have stated explicitly that consciousness must have a purely physical basis.

Not you specifically.

It is strong-AI that needs to understand it if they are the ones claiming to recreate an intelligence. Rather than understanding it, they make the unfounded and arbitrary presumption that it is not a key element in intelligence.

That what is not a key element in intelligence?

I'm saying machines can never approach (true intelligence) because STRONG-AI doesn't know what it is and actively denies the phenomenon.

First you say that consciousness must have a purely physical basis. Then you say machines cannot achieve "true intelligence" (whatever that is).

If it has a purely physical basis, we can model it and run it in software, or implement it in hardware, or create some hybrid thereof.

It's difficult for me to understand how you could believe the things you claim to.
Noumenon
not rated yet Dec 12, 2014
You don't seem to want to read my posts carefully.

I'm saying machines can never approach (true intelligence) because STRONG-AI doesn't know what it is and actively denies the phenomenon.
First you say that consciousness must have a purely physical basis. Then you say machines cannot achieve "true intelligence" (whatever that is).


Again, I never said "cannot achieve "true intelligence"", .... I only established conditions for it to be possible.

Maybe I can clarify;

1) In principle, I believe it IS possible to create an artificial intelligence, ok,... however,

2) It is NOT possible to create an artificial intelligence (in the strong-AI sense, an actual autonomously thinking machine) without first understanding how it comes about in humans (or animals). That's it, ok. We do not know the role that consciousness plays in intelligence, if any. I say "if any", however it is clear that consciousness is the key element imo.
Noumenon
not rated yet Dec 12, 2014
Presently, strong-AI makes too many unfounded presumptions, namely that intelligence is algorithmic and so can be reproduced via software, and that consciousness is not a key element to intelligence.
russell_russell
not rated yet Dec 13, 2014
If it has a purely physical basis, we can model it and run it in software, or implement it in hardware, or create some hybrid thereof. - Huns


Absolutely correct.

The model is damage and repair.
http://medicalxpr...ain.html

It is safe to say literally no one cares for a model emulating damage and repair -
as part of normal brain activity.

Normal brain activity as in intelligence, consciousness, memory and learning.
All physical.
All doable.
As model.
As software.
As Hardware.

So you tell me. No one attempts or follows this approach. As if it were nonexistent.
A.I. will remain nonexistent as well until this approach and path is taken.
A.I. is hyper hype. You are simply being entertained.
Huns
not rated yet Dec 16, 2014
Noumenon, or russell russell, whichever you care to be called, as you are obviously the same person - here are direct quotations of things you have said right here:

I have NEVER stated it couldn't work in a machine. I have stated explicitly that consciousness must have a purely physical basis.

Agreed.

I'm saying machines can never approach (true intelligence)

If it MUST have a purely physical basis, it MUST conform to the laws of physics. We can simulate physical systems with computers. Therefore, it MUST, given sufficient computational power, be possible to simulate intelligence and consciousness and whatever other brain functions we like.

Safely say literally no one cares for a model emulating damage and repair

Molecular simulation would already encompass that, and such systems are under active development today.
Noumenon
not rated yet Dec 16, 2014
@Huns, or Sigh or GhostofOtto1923, whichever you care to be called, as you are obviously the same person [//sarcasm],...

If it MUST have a purely physical basis, it MUST conform to the laws of physics.


Agreed.

We can simulate physical systems with computers.


Agreed,..... but only to the extent that those physical systems are themselves understood.

Therefore, it MUST, given sufficient computational power, be possible to simulate intelligence and consciousness and whatever other brain functions we like.


If you merely mean to emulate intelligence or even emulate consciousness to the extent of passing the Turing test, of fooling an observer, then we have no disagreement.

However, since you mentioned 'laws of physics', it appears you may mean more than this,.. that the A.I. system would then be a real autonomous conscious thinking intelligence,.... as in strong-AI, .... of which I have been speaking.


Noumenon
not rated yet Dec 16, 2014


If you expect that strong-AI could create an autonomously conscious thinking intelligence, on a computer,... then you're making a few unfounded presumptions...

You're assuming that conscious intelligence is computable,... that it operates on the basis of carrying out instructions. In fact it may not be computable.

You're assuming that simulating physical laws renders those physical laws operative. It does not. Simulating a grasshopper, no matter how completely, does not mean that you have created a living grasshopper. The simulated grasshopper cannot be said to be alive,... because the physical form has been changed. It may be that conscious intelligence requires a specific physical form.

The basis of my argument above has been simply that consciousness is not understood at present. That the role of consciousness in intelligence is not understood. As you said, if we know the physical laws governing a phenomenon then it can be simulated,... but we don't.
russell_russell
not rated yet Dec 17, 2014
Molecular simulation would already encompass that, and such systems are under active development today. - Huns


No A.I. proponent has molecular simulation in mind for intelligence, consciousness, memory and learning. So no A.I. will occur in your lifetime. No A.I. proponent thinks damage is normal for intelligence, consciousness, memory and learning. No A.I. proponent will use damage to create intelligence, consciousness, memory and learning.

Damage is governed by physical law. You don't need to simulate damage. Damage happens all the time everywhere. When you control damage you create life, life's evolution, and the result you label intelligence, consciousness, memory and learning.

Any and all repair attempted by any biological life form in answer to damage is marked by a lesion (no repair is perfect). All lesions are inheritable. All gene expression is the result of lesions.

Can you imagine what goes on in the mind of an A.I. proponent when they read the above?

russell_russell
not rated yet Dec 17, 2014
They are dumbfounded. They draw a blank.
russell_russell
not rated yet Dec 17, 2014
That the role of consciousness in intelligence is not understood. - N


With all due respect, bullshit.

Consciousness is damage recorded. Recalled or retrieved after being repaired.
Eons of conjecture and philosophy are dispelled.

There is a probability you will be intelligent. Depending on where, when, what, how and how often molecular repair occurs on pairs of bases all life possesses.

So what are the prerequisites for consciousness? One is neurons. More than one. They don't divide (replicate). Cell division destroys or distributes accumulated damage.

You don't find this approach or attitude towards damage in computer science.
Just the opposite. Taking them further and further away from their self-proclaimed holy grail of duplicating consciousness and intelligence.

Damage does not need a specific physical form.
Huns
not rated yet Dec 22, 2014
Agreed,..... but only to the extent that those physical systems are themselves understood.

If the simulation is at the molecular level, it will simulate things whether we understand them or not. Watching it run is one of the ways we WILL understand. Perhaps even some quantum simulation is needed. This too can be observed and used to figure out what is actually necessary to simulate. Most of a neuron's DNA is concerned with biological function rather than information processing.

You're assuming that conscious intelligence is computable,... that it operates on the basis of carrying out instructions. In fact it may not be computable.

If it's in this universe, it can be simulated in a computer. If it's not "computable" (simulatable) by a computer with adequate resources, then it doesn't exist in this universe.
Huns
not rated yet Dec 22, 2014
You're assuming that simulating physical laws renders those physical laws operative.

Actually, the laws will be operative in the simulator.

Simulating a grasshopper, no matter how completely, does not mean that you have created a living grasshopper.

Within the universe of the simulator, it will be quite alive. It will behave the same way that it would if it were a physical grasshopper.

It may be that conscious intelligence requires a specific physical form.

A form which relies on the laws of physics, which we can simulate, and therefore produce virtually.
Huns
not rated yet Dec 22, 2014
No A.I. proponent has molecular simulation in mind for intelligence, consciousness, memory and learning.

Utterly false: http://en.wikiped..._Project

So no A.I. will occur in your lifetime.

You think no one is working on a molecular simulation, even though millions have been spent on that exact thing. You also don't know how long I will live. As such, you have no basis to suppose what will and won't happen with AI during my lifetime.

No A.I. proponent thinks damage is normal for intelligence, consciousness, memory and learning. No A.I. proponent will use damage to create intelligence, consciousness, memory and learning.

You don't know the mind of every AI proponent. In any case, a molecular simulation will indeed simulate those effects. They intend to simulate the operation of DNA.
Huns
not rated yet Dec 22, 2014
Consciousness is damage recorded. Recalled or retrieved after being repaired.

What do you base this claim on?

Eons of conjecture and philosophy are dispelled.

I think you should be on this site instead ---> http://timecube.com
russell_russell
not rated yet Dec 24, 2014
What do you base this claim on? - H


You have not done your homework. No one asks this. No one that is informed.
http://medicalxpr...ies.html

David Glanzman is correct. He will eventually discover memory is harbored as repair done with DNA isoforms.
http://medicalxpr...ain.html
That is the base of the claim stated here.

There are more than ten thousand related research papers that support this.

You recommend a link and site that is a direct reflection on you and your knowledge.
Not even JVK stoops this low.

Noumenon
not rated yet Dec 31, 2014
You're assuming that conscious intelligence is computable,... that it operates on the basis of carrying out instructions. In fact it may not be computable.

If it's in this universe, it can be simulated in a computer. If it's not "computable" (simulatable) by a computer with adequate resources, then it doesn't exist in this universe.

According to your naïveté, perhaps. However, without invoking mere faith that it is so, it is not possible to maintain that claim without first understanding how consciousness comes about. It 'may be' so at best, but it does not follow from logical necessity. Roger Penrose has argued that it is possible for a system to be deterministic without being algorithmic.
Noumenon
not rated yet Dec 31, 2014
That the role of consciousness in intelligence is not understood. - N


With all due respect, bullshit.

Consciousness is damage recorded. Recalled or retrieved after being repaired.
Eons of conjecture and philosophy are dispelled.

There is a probability you will be intelligent. Depending on where, when, what, how and how often molecular repair occurs on pairs of bases all life possesses.

So what are the prerequisites for consciousness? One is neurons. More than one. They don't divide (replicate). Cell division destroys or distributes accumulated damage.

[...]

Damage does not need a specific physical form.


Are you referring to memory, or consciousness? There seems to be a large gap between the mechanism of 'molecular repair', which I don't doubt, ....and consciousness. Do you have a ref that would explain the link?
russell_russell
not rated yet Jan 02, 2015
http://medicalxpr...ain.html

If you associate normal brain activity with consciousness, "...a large gap..." narrows.

What is your definition of consciousness?
You have quoted my definition above.
Consciousness is the retrieval or recall of damage recorded after repair.
Repair (of damage) is stored. This storage is conventionally labeled memory.

This replaces the belief that memory resides at the synapse.
"Long-term memory is not stored at the synapse," said David Glanzman, a senior author of the study, and a UCLA professor of integrative biology and physiology and of neurobiology.

http://medicalxpr...ies.html

As opposed to...
http://phys.org/n...258.html
...the same researcher seven years prior.

The link provides the reference.
The reference provides the original research.
The original research provides further references.

russell_russell
not rated yet Jan 02, 2015
This is a commentary thread.
If you want to lose 99.9% of onlookers, simply use research-specific jargon.
Or join a forum.

Noumenon
not rated yet Jan 05, 2015
If you associate normal brain activity with consciousness, "...a large gap..." narrows.

What is your definition of consciousness?
You have quoted my definition above.


The phenomenon of awareness, ....the intentional retrieval of memory, ....the controlling of bodily movements on account of a desire to do so for a pre-conceived purpose,.... reactive responses to external events that are more complex than those that can be regarded as involuntary,... etc.

Consciousness is the retrieval or recall of damage recorded after repair.
Repair (of damage) is stored. This storage is conventionally labeled memory.


"What" directs the retrieval and recalling then, in a coordinated fashion? During dreams this seems to occur randomly because we are 'unconscious'. I believe the brain must on account of it's functioning constantly retrieve memories, however, this does not occur randomly while we are awake (aware).
Noumenon
not rated yet Jan 05, 2015
.... instead it is 'directed' by a 'something' through intentionality,.... the 'reasons' behind the conscious retrieval of memories cannot themselves be memories, as in 'canned responses'.
Noumenon
not rated yet Jan 05, 2015
..... I am not presently 'aware' of the vast majority of my memories,... so obviously there is a difference between stored memories and conscious awareness of a memory. If I 'replay', from memory, a past event, what makes that renewed awareness different from its stored memory state?
russell_russell
not rated yet Jan 05, 2015

"What" directs the retrieval and recalling then, in a coordinated fashion? - N


The order in which damage occurs. Repair does not always follow immediately or at all.
(You cannot retrieve or recall the irreparable)

Dreams are not random recall or retrieval.

The more you 'replay', the more damage, now repaired, takes part in the cognitive process in progress.
A trivial example:
A non-intentional wrong musical note sung or played. Played or sung again correctly. You've added damage and its repair on a molecular scale. Myelination occurs along whatever is used over and over again.

russell_russell
not rated yet Jan 05, 2015
If I 'replay', from memory, a past event, what makes that renewed awareness different from its stored memory state? - N


The renewal is unique and is now a part of the memory state. No two renewals are alike. Dynamics disallow this. Any 'cue' will do to 'replay' a stored memory state.
A 'static' cue is not biologically possible.
Just the opposite of A.I.

russell_russell
not rated yet Jan 06, 2015
Going further...
http://www.nature...-1.15435

Testable metaphysics.
Two camps championing two views.
Here is a brief Howard Wiseman excerpt:

"Those who insist that correlations are explicable must conclude that causal influences can go faster than light. A challenge for these non-localists is: why does nature nevertheless conspire to prevent faster-than-light signalling?

Those who hold Einstein's principle to be inviolable (the localists) must conclude that some events are correlated for no reason. A challenge for them is: if correlations do not necessarily imply a cause, when should scientists look for causes, and why?"

The second paragraph is obvious for the A.I. crowd. Otherwise A.I. makes no sense.

The first paragraph is subtle. The repair of damage as a prerequisite for consciousness is not obvious. The GUT of biology.
