Interview: Dr. Ben Goertzel on Artificial General Intelligence, Transhumanism and Open Source (Part 1/2)

Jun 10, 2011 by Stuart Mason Dambrot
Dr. Ben Goertzel. Photo courtesy: Neural Imprints (http://www.neuralimprints.com/)

(PhysOrg.com) -- Dr. Ben Goertzel is Chairman of Humanity+; CEO of AI software company Novamente LLC and bioinformatics company Biomind LLC; leader of the open-source OpenCog Artificial General Intelligence (AGI) software project; Chief Technology Officer of biopharma firm Genescient Corp.; Director of Engineering of digital media firm Vzillion Inc.; Advisor to the Singularity University and Singularity Institute; Research Professor in the Fujian Key Lab for Brain-Like Intelligent Systems at Xiamen University, China; and General Chair of the Artificial General Intelligence Conference Series. His research work encompasses artificial general intelligence, natural language processing, cognitive science, data mining, machine learning, computational finance, bioinformatics, virtual worlds and gaming, and other areas. Dr. Goertzel has published a dozen scientific books, 100+ technical papers, numerous journalistic articles, and the futurist treatise A Cosmist Manifesto. Before entering the software industry he served as university faculty in several departments of mathematics, computer science and cognitive science in the US, Australia and New Zealand.

Dr. Goertzel spoke with Critical Thought’s Stuart Mason Dambrot following his talk at the recent 2011 Transhumanism Meets Design Conference in New York City. His presentation, Designing Minds and Worlds, asked and answered two key questions: How can we design a world (virtual or physical) so that it supports ongoing learning, growth and ethical behavior? And how can we design a mind that takes advantage of the affordances its world offers? These are fundamental issues that bridge AI, robotics, cyborgics, game design, sociology, psychology and other areas. His talk addressed them from a cognitive systems theory perspective and discussed how they are concretely being confronted in his current work applying the OpenCog Artificial General Intelligence system to control game characters in virtual worlds.


This is the first part of a two-part article. The second part is available at http://phys.org/news/2011-06-dr-ben-goertzel-artificial-intelligence_1.html
SM Dambrot: We’re here with Dr. Ben Goertzel, CEO of Novamente, leader of OpenCog and Chairman of Humanity+ [at the 2011 Humanity+ Transhumanism Meets Design Conference in New York City]. Thank you so much for your time.

Dr. Goertzel: It’s great to be here.

SM Dambrot: In your very interesting talk yesterday, you spoke about the importance of the relationship between minds and worlds. Could you please expound on that a bit in terms of Artificial General Intelligence?

Dr. Goertzel: As an AGI developer this is a very practical issue which initially presents itself in a mundane form – but many subtle philosophical and conceptual problems are lurking there. From the beginning, when you’re building an AGI system you need that system to do something – and most AI history is about building AI systems to solve very particular problems, like planning and scheduling in a military context, finding documents online in a Google context, playing chess, and so forth. In these cases you’re taking a very specific environment – a specific set of stimuli – and some very specific tasks, and customizing an AI system to do those tasks in that environment, all of which is quite precisely defined. When you start thinking about AGI – Artificial General Intelligence – in the sense of human-level AI, you not only need to think about a broader level of cognitive processes and structures inside the AI’s mind, you need to think about a broader set of tasks and environments for the AI system to deal with.

In the ideal case, one could approach human-level AGI by placing a humanoid robot capable of doing everything a human body can do in the everyday human world, and then the environment is taken care of – but that’s not the situation we’re confronted with right now. Our current robots are not very competent when compared with the human body. They’re better in some ways – such as withstanding extremes of weather that we can’t – but by and large they can’t move around as freely, they can’t grasp things and manipulate objects as well, and so on. Moreover, if you look at the alternatives – such as implementing complex objects and environments in virtual and game worlds – you encounter a lot of limitations as well.

You can also look at types of environments that are very different from the kinds of environments in which humans are embedded. For example, the Internet is a kind of environment that is immense and has many aspects that the everyday pre-Internet human environment doesn’t have: billions of text documents, data from weather satellites, millions of webcams… but when you have a world for the AI that’s so different from what we humans ordinarily perceive, you start to question whether an AI modeled on human cognitive architecture is really suited for that sort of environment.

Initially the matter of environments and tasks may seem like a trivial issue – it may seem that the real problem is creating the artificial mind, and then when that’s done, there’s the small problem of making the mind do something in some environment. However, the world – the environment and the set of tasks that the AI will do – is very tightly coupled with what is going on inside the AI system. I therefore think you have to look at both minds and worlds together.

SM Dambrot: What you’ve just said about minds and worlds reminds me of two things. One is the way living systems evolved – that is, species evolve not in a null context but rather, as you so well put it, tightly coupled to, in this case, an environmental niche; every creature’s sensory apparatus is tuned to that niche, so the mind and world co-evolve. The other is what you mentioned yesterday when discussing virtual and game worlds – that physics engines are not being used in all interactive situations – which leads me to ask what you think will happen once true AGIs are embodied.

Dr. Goertzel: If we want to, we can make the boundary between the virtual and physical worlds pretty thin. Most roboticists work mostly in robot simulators, and a good robot simulator can simulate a great deal of what the robot confronts in the real world. There isn’t a good robot simulator for walking out in the field with birds flying overhead, the wind, the rain, and so forth – but if you’re talking about what occurs within someone’s house a lot can be accomplished.

It’s interesting to see what robot simulators can and can’t do. If we were trying to simulate the interior of a kitchen, for example, a robot simulator can deal with the physics of chairs and tables, pots and pans, the oven door, and so forth. Current virtual worlds don’t do that particularly well because they only use a physics engine for a certain class of interactions, and generally not for agent-object or agent-agent interactions – but these are just conventional simplifications made for the sake of efficiency, and can be overcome fairly straightforwardly if one wants to expend the computational resources on simulating those details of the environment.

If you took the best current robot simulators, most of which are open source, and integrated them with a virtual world, then you could build a very cool massively multiplayer robot simulator. The reason this hasn’t happened so far is simply that businesses and research funding agencies aren’t interested in this. I’ve thought a bit about how to motivate work in that regard. One idea is to design a video game that requires physics – for example, a robot wars game in which players build robots from spare parts, and the robots do battle. You could also make the robots intelligent and bring some AI into it, which if done correctly would lead to the development of an appropriate cognitive infrastructure.

Having said that, going back to the kitchen – what would current robot simulators not be able to handle, but would have to be newly programmed? Dirt on the kitchen floor, so that in some areas you could slip more than others; baking, where when you mix flour and sugar and put it in the oven, the chemistry is beyond what any current physics engine can really do; paper burning in the flame of a gas stove; and so on. The open question is how important these bits and pieces of everyday human life are to the development of an intelligence.

There’s a lot of richness in the everyday human world that little kids are fascinated by – fire, cooking, little animals – because this is part of the environmental niche that humans adapted to. Even the best robot simulators don’t have that much richness, so I think that it’s an interesting area to explore. I think we should push simulators as far as we can, create robot simulators with virtual worlds, and so forth – but at the same time I’m interested in proceeding with robotics as well because there’s a lot of richness in the real world and we don’t yet know how to simulate it.

The other thing you have to be careful of is that most of the work done with robots now completely ignores all this richness – and I’m as guilty of that as anybody. When we use robots in our lab in China, do we let the robots roam free in the lab? Not currently. We made a little fenced-off area, we put some toys in it, and we made sure the lighting is OK, because the robots we’re using (Aldebaran Nao robots) cost $15,000 and they have a tendency to fall down. It’s annoying when they break – you have to send them back to France to get repaired.

So, given the realities of current robot technology we tend to keep the robots in a simplified environment both for their protection, and so that their sensation and actuation will work better. They work, they’re cool, and they pick up certain objects well – but not most of those in everyday human life. When we fill the robot lab only with objects they can pick up, we’re eliminating a lot of the richness and flexibility a small child has.

SM Dambrot: This raises two more questions: Is cultural specificity required for any given AGI, and is it necessary to imbue an AGI with a sense of curiosity?

Dr. Goertzel: Our fascination with fire is an interesting example. You wonder to what extent it’s driven by pure curiosity versus our actual evolutionary history with fire – something that’s been going on for millions of years. I think our genome is programmed with reactions to many things in our everyday environment which drive curiosity – and fire and cooking are two interesting examples.

Having said that, yes, curiosity is one of the base motivators. We’re already using that fact in our OpenCog work. One of the top-level demands, as we call them, of our system is the ability to experience novelty, to discover new things. There are two demands: to discover new things in the world around it and just have the experience of learning new things internally, which can come through external or internal discovery. So we’ve already programmed things very similar to curiosity as top-level goals of the system. Otherwise you could end up with a boring system that just wanted to get all of its basic needs gratified, and would then just sit there with nothing to do.
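The demand-driven motivation scheme Goertzel describes – novelty as a top-level goal alongside basic needs, so the system never just sits idle once its needs are gratified – can be sketched as a toy goal system. The demand names and numbers below are illustrative assumptions, not OpenCog's actual implementation:

```python
# Toy sketch of a demand-driven goal system in which curiosity-like
# demands sit alongside basic needs. Names and satisfaction levels are
# illustrative only; OpenCog's real motivation machinery is far richer.

def most_urgent_demand(satisfaction_levels):
    """Pick the demand whose current satisfaction is lowest."""
    return min(satisfaction_levels, key=satisfaction_levels.get)

demands = {
    "energy": 0.9,             # basic need: nearly satisfied
    "social_interaction": 0.7,
    "external_novelty": 0.3,   # discover new things in the world
    "internal_novelty": 0.5,   # learn new things through internal discovery
}

# With basic needs largely met, the novelty demands drive behavior,
# so the agent keeps exploring rather than sitting there with nothing to do.
print(most_urgent_demand(demands))  # external_novelty
```

In this sketch the agent always serves its least-satisfied demand; because novelty satisfaction decays whenever nothing new is experienced, curiosity-driven behavior keeps resurfacing even after basic needs are gratified.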

SM Dambrot: That’s very interesting – especially the internal novelty drive. That seems even more exciting in terms of any type of AGI analogue to human intelligence, because we spend so much time discovering ideas internally.

Dr. Goertzel: Some people more than others – it’s cultural to some extent. I think we as Westerners spend more time intellectually introspecting than do people from Eastern cultures. Being from a Jewish background, I grew up in a culture particularly inclined towards intellectual introspection and meta-meta-meta thinking.

On a technical level, what we’ve done to inculcate the OpenCog system with a drive for internal novelty and internal learning and curiosity is actually very simple: It’s based on information theory and is related to work by Jürgen Schmidhuber and others on the mathematical formulation of surprise. In an information-theoretic sense, OpenCog is always trying to surprise itself.
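The mathematical formulation of surprise referred to here is commonly expressed as Shannon surprisal: the surprise of an observation is the negative log of the probability the agent assigned to it. A minimal sketch of that standard formulation (not OpenCog's actual code):

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon surprisal in bits: the less probable an observation
    was under the agent's model, the more surprising it is."""
    return -math.log2(p)

# An outcome the agent considered a coin flip carries 1 bit of surprise;
# an outcome it considered eight times less likely carries 3 bits.
print(surprisal_bits(0.5))    # 1.0
print(surprisal_bits(0.125))  # 3.0
```

An agent "trying to surprise itself" in this sense seeks observations with high surprisal under its own predictive model – which in practice means seeking out situations its model does not yet predict well.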

SM Dambrot: I recall that when Prof. Schmidhuber was discussing Recurrent Neural Networks at Singularity Summit ’09, he talked about how the system looks for that type of novelty in its bit configurations.

Dr. Goertzel: That’s right – and what we do with OpenCog is quite similar to that. These are ideas that I encountered in the 1980s in the domain of music theory, based on Leonard Meyer’s Emotion and Meaning in Music. He was analyzing classical music – Bach, Mozart and so forth – and the idea he came up with was that aesthetically good music is all about the surprising fulfillment of expectations, which I thought was an interesting phrase. Now, if something is just surprising, it’s too random – some modern music can be like that, modern classical music in particular. If something is just predictable – pop music is often like that, and some classical music seems like that – it’s boring. The best music shows you something new, yet it still fulfills the theme in a way that you didn’t quite expect – so it’s even better than if it had just fulfilled the theme.

I think that’s an important aesthetic in human psychology, and if you look at the goal system of a system like OpenCog, the system is seeking surprise but it also gets some reward from having its expectations fulfilled. If it can do both of those at once then it’s getting many of its demands fulfilled at the same time, so in principle it should be aesthetically satisfied by the same sorts of things that people are.

This is all at a very vague level, because I don’t think that surprise and fulfillment of expectations are the ultimate equation of aesthetics, music theory or anything else. It’s an interesting guide, though, and it’s interesting to see the same principles seem to hold up for human aesthetics in quite refined domains, and also for guiding the motivations of very simple AI systems in video game type worlds.
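The "surprising fulfillment of expectations" idea admits a toy quantitative reading: score an event by both its surprisal and how well it matches a predicted theme, so that purely random and purely predictable stimuli both score low. The function below is an illustrative assumption, not a formula from Meyer or OpenCog:

```python
import math

def aesthetic_score(p_observed: float, theme_match: float) -> float:
    """Toy score rewarding events that are both surprising (low probability)
    and theme-fulfilling (theme_match in [0, 1]). Purely illustrative."""
    surprise = -math.log2(p_observed)
    return surprise * theme_match

# Predictable and on-theme: little surprise, low score.
print(aesthetic_score(0.5, 1.0))     # 1.0
# Surprising but off-theme (random-sounding): the surprise is wasted.
print(aesthetic_score(0.0625, 0.1))  # 0.4
# Surprising *and* theme-fulfilling: the highest score.
print(aesthetic_score(0.0625, 1.0))  # 4.0
```

The multiplicative form captures the qualitative point in the interview: maximum reward comes from satisfying the surprise demand and the expectation-fulfillment demand at the same time.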

SM Dambrot: I’ve been wondering about materials and the structure of those materials. Do you think it’s important or even necessary to have something patterned on our neocortical structure – neurons, axons, synapses, propagation – in order to really emulate our cognitive behavior, or is that not so relevant?

Dr. Goertzel: The first thing I would say is that in my own primary work right now with OpenCog, I’m not trying to emulate human cognition in any detail, so for what I’m trying to do – which is just to make a system that’s as smart as a human in vaguely the same sort of ways that humans are, and then ultimately capable of going beyond human intelligence – I’m almost sure that it’s not necessary to emulate the cognitive structure of human beings. Now, if you ask a different question – let’s say I really want to simulate Ben Goertzel and make a robot Ben Goertzel that really acts, thinks, and hopefully feels like the real Ben Goertzel – to do that is a different proposition, and it’s less clear to me how far down one needs to go in terms of emulating neural structure and dynamics.

In principle, of course, one could simulate all the molecules and atoms in my brain in some kind of computer, be it a classical or quantum computer – so you wouldn’t actually need to get wet and sticky. On the other hand, if you need to go to a really low level of detail, the simulation might be so consumptive of computing power that you might be better off getting wet and sticky with some type of nanobiotech. When you talk about mind uploading, I don’t think we know yet how micro or nano we need to get in order to really emulate the mind of a particular person – but I see that as a somewhat separate project from AGI, where we’re trying to emulate human-like, human-level intelligence that is not an upload of any particular person. Of course, if you could upload a person, that would be one path to a human-level AGI… it’s just that it’s not the path I’m pursuing now – not because it’s uninteresting, but because I don’t know how to progress directly and rapidly on that right now.

I think I know how to build a human-level thinking machine…I could be wrong, but at least I have a detailed plan, and I think if you follow this plan for, let’s say, a decade, you’d get there. In the case of mind uploading, it seems there’s a large bottleneck of information capture – we don’t currently have the brain scanning methods capable of capturing the structure of an individual human brain with high spatial and temporal accuracy at the same time, and because of that we don’t have the data to experiment with. So if I were going to work on mind uploading, I’d start by trying to design better methods of scanning the brain – which is interesting but not what I’ve chosen to focus on.

SM Dambrot: Regarding uploading, then, how far down do you feel we might have to go? Is imaging a certain level of structure sufficient? Do we have to capture quantum spin states? I ask because Max More mentioned random quantum tunneling in his talk, suggesting that quantum events may be a factor in cryogenically-preserved neocortical tissue.

Dr. Goertzel: I’m almost certain that going down to the level of neurons, synapses and neurotransmitter concentrations will be enough to make a mind upload. When you look at what we know from neuroscience so far – such as what sorts of neurons are activated during different sorts of memories, the impact that neurotransmitter levels have on thought, and the whole area of cognitive neuroscience – I think there’s a pretty strong case that neurons and glia, and the molecules intervening in interactions between these cells and other things on this level, are good enough to emulate thought without having to go down to the level of quarks and gluons, or even (as Dr. Stuart Hameroff suggests) the level of the microtubular structure within neurons. I wouldn’t say that I know that for certain, but it would be my guess.

From the perspective of cryogenic preservation, you might as well cover all bases and preserve everything so well that even if our current theories of neuroscience and physics turn out to be wrong, you can still revive the person. So from Max More’s perspective as CEO of Alcor, I think he’s right – you need to preserve as much as you can, so as not to make any assumptions that might prevent you from reviving someone.

SM Dambrot: Like capturing a photograph in RAW image format…

Dr. Goertzel: Yes – you want to save more pixels than you’ll ever need just in case. But from the viewpoint of guiding scientific research, I think it’s a fair assumption that the levels currently looked at in cognitive neuroscience are good enough.


User comments

Eikka
1.4 / 5 (12) Jun 10, 2011
They're still talking of Intelligence as if it can be replicated by a machine that operates on formal rules.

What I want to know for sure, before calling the machine intelligent, is whether the human brain is fundamentally similar to that kind of computational mechanism, or whether it employs some other mechanism which isn't computational.

For example, having a mechanism that relies on some sort of truly random chaos effect to optimize answers isn't computable – you can only approximate it, and the more precisely you try, the more inefficient the AI becomes – and any computational attempt you may achieve just isn't the same thing.

If your brain is essentially a bag of a billion dice that you throw and see where the numbers fall, assuming that dice are truly random, trying to come up with a pseudo-random analog would not be intelligent in the same sense.
antialias_physorg
4.6 / 5 (11) Jun 10, 2011
They're still talking of Intelligence as if it can be replicated by a machine that operates on formal rules.

Well, the brain works on 'formal rules', too (electrical/electrochemical ones).

The point whether the mechanism for intelligence is the same in computers or in brains is not really relevant. It's the effect (i.e. 'apparent intelligence') which is what counts.

For example, having a mechanism that relies on some sort of truly random chaos effect to optimize answers isn't computable

I think you are confusing computational and predictable. Building a good random number generator which doesn't rely on pseudo-random numbers isn't hard (e.g. use the decay of some radioactive isotope).
Eikka
1.5 / 5 (8) Jun 10, 2011
And here's why:

If you have a pseudo-random number generator, it works by taking some starting value, such as the number of seconds since 1.1.1970, and calculating a long list of numbers that have the characteristic distribution of random numbers.

The difference is that once the initial value is chosen, the list of numbers must follow. This creates a problem: a mechanism that is supposed to be random is now pre-determined. Every possible action the mechanism takes based on these numbers can be known beforehand by knowing the initial value.

So, our AI that uses pseudo-randomness is simply a machine that follows a pre-defined program that can be written down as a long list of IF x THEN y GOTO z.

And that is not intelligence. If it was, we'd have to argue that our television or the thermostat in the fridge is intelligent in the same sense as we are - just less so.
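The determinism described in this comment – once the initial value is chosen, the entire sequence must follow – can be demonstrated directly with a seeded pseudo-random generator; a minimal Python sketch:

```python
import random

# Two generators given the same starting value emit identical "random"
# sequences: every number is fully determined once the seed is chosen.
a = random.Random(19700101)
b = random.Random(19700101)

seq_a = [a.randint(0, 99) for _ in range(10)]
seq_b = [b.randint(0, 99) for _ in range(10)]
print(seq_a == seq_b)  # True
```

Knowing the seed, every possible value the generator will ever produce can be computed in advance – exactly the property that distinguishes pseudo-randomness from a physical source such as radioactive decay.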
Eikka
1 / 5 (3) Jun 10, 2011

I think you are confusing computational and predictable. Building a good random number generator which doesn't rely on pseudo-random numbers isn't hard (e.g. use the decay of some radioactive isotope).


A random number generator that relies on radioactive isotopes is precisely what I want for the analog of a bag of dice.

It is not computational: it doesn't compute anything; it takes something which is (believed to be) truly random and measures it. Now the only question is, does the brain have to have a billion independent random number generators to work, or can it do with only a few?
antialias_physorg
4.7 / 5 (3) Jun 10, 2011
So, our AI that uses pseudo-randomness is simply a machine that follows a pre-defined program that can be written down as a long list of IF x THEN y GOTO z.

Not entirely: Modern programs (and all serious AI implementations) work in parallel over several machines.

Unless you implement artificial time stamps / time keepers for all passed messages, you get 'real world' influences into the mix (e.g. variable lag between machines) which can quickly lead to a non-deterministic chain of events – even from a precomputable set of pseudo-random numbers.
Eikka
1.8 / 5 (5) Jun 10, 2011

The point whether the mechanism for intelligence is the same in computers or in brains is not really relevant. It's the effect (i.e. 'apparent intelligence') which is what counts.


As per Turing's argument, we cannot distinguish between a sufficiently complex machine that isn't intelligent, and a machine that is.

Apparent intelligence means nothing. The "easiest" way to meet the requirements is to simply throw so much computational power and data at it that you exhaust all the ways we can test the machine, and it's still just a mechanized puppet that says and does everything according to a list of instructions.
antialias_physorg
4.2 / 5 (5) Jun 10, 2011
Now the only question is, does the brain have to have a billion independent random number generators to work, or can it do with only few?

Mathematically the quality of a sequence of random numbers is no better (or worse) if you use one or many such generators.
Eikka
1 / 5 (2) Jun 10, 2011

Unless you implement artificial time stamps / time keepers for all passed messages


Which is exactly what you do - the computers have schedulers to keep them from crashing by getting into a data gridlock or race conditions etc. because they are not analog machines that can deal with division by zero or other fibs like that.
Eikka
1 / 5 (3) Jun 10, 2011
Mathematically the quality of a sequence of random numbers is no better (or worse) if you use one or many such generators.


If you get ten random values generated from a single measurement, then these ten values depend on the starting value. Thus they are linked - if you have one number here, then you must have a certain another number there.

In essence, the whole state of the "brain" is randomized from a single point – perhaps even a single particle – which, if it can really work that way, presents interesting philosophical questions.
Eikka
2 / 5 (4) Jun 10, 2011
And the other problem of the single random number generator is the amount of information you can measure from it.

Let's say your random number generator outputs one truly random 32-bit number, because that's how much difference you can measure from your random particle. It means that your artificial brain can only have 4.3 billion different permutations of states it can exist in.
antialias_physorg
5 / 5 (1) Jun 10, 2011
Which is exactly what you do - the computers have schedulers to keep them from crashing by getting into a data gridlock or race conditions

Actually you don't HAVE to do that (I'm currently designing a software system for another company that works entirely asynchronously, without the need for one type of component to be aware of timing aspects of any other type of component.)

All you require is good error checking / validation. But mostly the software doesn't care what happens in which order.

If you get ten random values generated from a single measurement, then these ten values depend on the starting value.
I meant with radioactive random number generators. But even with pseudo-random generators: knowing the seeds of 10 generators generating one number each is equivalent to knowing the seed of one generator and generating 10 numbers from it.

can only have 4.3 billion different permutations of states it can exist in.

Just generate more numbers then.
nothingness
5 / 5 (4) Jun 10, 2011
why not a quantum random number generator?
LivaN
not rated yet Jun 10, 2011
For example, having a mechanism that relies on some sort of truly random chaos effect to optimize answers isn't computable


I don't understand.

You say that if mechanism A (the human brain) relies on true randomness (TR) at some point to generate output,
then this entire process can't be computed, because computation cannot generate TR.
But the fact that mechanism A has access to TR (whether via quantum effects or something as yet undiscovered) means that there must be some mechanism that affects the physical world, enough so that mechanism A can interact with it, that generates TR. That mechanism we could use or duplicate if possible.

Why compute randomness when the human brain already interacts with a mechanism that gives TR?
El_Nose
not rated yet Jun 10, 2011
Wow, you guys went off on some weird tangents...

But if you wanted to introduce true randomness into the system, you could very easily change what type of processor is used. Current CPUs use error detection and correction internally to fix random changes in voltage that occur – but many FPCPU can be a lot more lenient in this regard, and this also means they can be a lot faster than current CPUs. So the basic idea is this: the CPU every now and again might say 1+1=3 or 5, but this is not what the CPU should be processing – it should be linked like a neuron to sensors that use a classical design without fail – kinda like the human brain can hallucinate, but that doesn't mean the eyes are feeding it the wrong info; it means the brain is interpreting it wrong.

I would love a grant to pursue this sort of work.
George_Rodart
not rated yet Jun 10, 2011
Random numbers seem irrelevant here. Is our theoretical AI deterministic? Will it come to the same result repeatedly in some computational manner? Or will it produce statistical results like quantum phenomena? Even if you create a very smart AGI, one with all the right answers, will it be conscious, that is, self-aware?
hush1
1.7 / 5 (6) Jun 10, 2011
Wow, you guys went off on some weird tangents


Discourse for the sake of understanding? Evolving descriptions of Nature that meet the demands of science?

Well, here is one 'demand' from your former 20th century colleagues:

No prior geometry.
Deceptively simple. Diabolical to avoid.
The demand has yet to be met.
Find the math (language) that does this.
In an understandable expression that means:
The language used is an exact language.

And that is just one of the demands your former colleagues demanded.
Once that demand is met, you may continue onto the next step:

"for exact understanding exact language is necessary."

(Gurdjieff to Ouspensky)

When using exact language you begin to understand.

Make sure before you comment (on this comment) that the demand:
'No prior geometry' is understood exactly.

This demand led the list of all demands placed on 20th century physics and mathematics.

Isaacsname
1 / 5 (1) Jun 10, 2011
Just what I was thinking about. Sort of. What would have happened in the universe if conscious entities never came into existence ? Would things have progressed, evolved, only to a certain point ? Would evolution have come to a halt ? It seems that the physical evolution of things in the universe could only go so far without conscious life around to manipulate environments in ways nature never intended. Like life itself was necessary to overcome a dead end. I feel almost like we have been thrust into " hyper-evolution " by the advent of consciousness. Are we outpacing our biological ability to evolve with an environment that changes at a far faster pace than what natural evolution normally happens at? Why do we have brains that can learn complex math and physics in the first 3 years of life, but yet we have to go to school to learn maths ? Why is the "I", the conscious idiot, at the forefront of perception? Because a computer cannot have unorthodox thoughts ?
Isaacsname
1 / 5 (1) Jun 10, 2011
Can AI researchers program a computer to fool itself? Is that a humans-only ability? I read that an average human brain could be compared to a 160,000-megahertz processor, but yet, consciously, we are thinking very slowly; we use the language we are familiar with speaking out loud to talk to ourselves in our minds (self-discussion) or have thoughts. In that respect, we lose to computers by a longshot. I love that the human brain is actually shrinking as we "evolve"; the corpus callosum of a macaque allows communication between hemispheres twice as fast as a human's, but yet we see ourselves as "dominant" in many ways over monkeys. I'd wager that between the de-evolution of the human brain and the exponential growth predicted by Moore's Law, we should be close to "real" AI sometime soon.

I look forward to the 2nd part of this interview.
hush1
1.6 / 5 (7) Jun 10, 2011
Why do we have brains that can learn complex math and physics in the first 3 years of life, yet we have to go to school to learn maths?


Because you cannot recognize the language called 'learning'. You have no adequate description for your first breath, no matter how much 'importance' a learning event carries.

Are we outpacing our biological ability to evolve with an environment that changes far faster than natural evolution normally proceeds?


The answer is yes. And you are NOT going to 'know' when AI takes its first 'breath'. You are NOT going to 'know' when the 'intelligence' you believe in is surpassed.
You WILL ask very, very stupid things like:
"Does life exist without human knowledge?"
"Are we alone?"

There is NO life form that will give you a 'reason' to contact a higher intelligence, whether AI or not.

Alright, I see, you don't understand. Give me at least ONE reason I cannot REFUTE as to why I need to 'contact' you!

Good Luck!


ngrailrei
2.3 / 5 (3) Jun 10, 2011
My recent book Deus ex Machina sapiens (available on Amazon) takes issue with the notion that mind or intelligence can be designed, though it can be taught even as it is developing (therefore the work of designers such as Dr. Goertzel in including ethical considerations in their designs, or at least their design philosophies, is good). My book argues that intelligence/mind/consciousness have never been and cannot be designed; they can only emerge through evolutionary processes.

Please allow the plug, since it is critically relevant to the discussion.
Isaacsname
not rated yet Jun 10, 2011
" There is NO life form that will give you a 'reason' to contact a higher intelligence, whether AI or not.

Alright, I see, you don't understand. Give me at least ONE reason I can not REFUTE, as to why I need to 'contact' you!

Good Luck! "

I agree, we can't concretely say exactly when, when we can't settle on a definition of what "it" is. As far as the purpose of trying to contact a perceived "higher" lifeform, I would think it's for the purpose of sharing information. I believe ultimately that we exist only for the self-preservation of information, but only information that serves the greater good of our species. When we leave physical existence, the only things we leave behind are our bodies and the information that passed through us; ironic that information has no tangible physical characteristics, yet is the only thing left behind in the physical universe. Sorry, a tangent. But how do you give AI a sense of morals, or altruism?
hush1
1.8 / 5 (5) Jun 10, 2011
Zero imagination is needed to assume Nature has processes.

For those hard-liners who "assume nothing", just comfort them with ONE assumption: among all of Nature's processes, at least one process will be a process we label evolution.

You don't need design. You DO need Nature.
That is all you need. Nature is your teacher.

For ALL those with less than critical thinking faculties, your book will provide the necessary distraction to protect the remaining 1% with critical thinking abilities, so they can go about meaningful research, no matter what branch of science. Thank you for sacrificing yourself to the 'lions' of ignorance, so that we may proceed with real science.

There are no ethics to consider in AI. It is great if you want publicity and want to sell books.

There is no method to recognize the 'birth' of AI.
There is no method to recognize 'higher intelligence'.
(You have to have that 'intelligence' to recognize it.)

hush1
2 / 5 (4) Jun 10, 2011
Again, for the 99%, I actually encourage you to ask yourselves, over and over again:
"Does life exist without human knowledge?"
"Are we alone?"

That way, we, the remaining 1%, will actually have the opportunity to do serious research, science, and, who knows, share that progress with you one day.
hush1
2 / 5 (4) Jun 10, 2011
it's for the purpose of sharing information


Wow! And Nature is the worst of all possibilities to share information. Just Wow.

I believe ultimately that we exist only for the self-preservation of information, but only information that serves the greater good of our species


Me, myself, and I. Yep. No room for Nature here. Got to preserve that information, don't ja know? Got my priorities.

But how do you give AI a sense of morals, or altruism ?


Too late. AI exists. So does Life. Without your knowledge.
We will let you answer all your questions. You don't need us. When you have no more questions. Then you are us.

Eikka
2 / 5 (3) Jun 10, 2011
Why compute randomness when you have it given, since the human brain already interacts with a mechanism that gives true randomness (TR)?


It is a question of structure.

If you have to have billions and billions of independent random number generators to get true intelligence, as we assume the human brain to possess, then the truly intelligent machine must also be an analog of this structure.

Generating these independent random numbers and then distributing them through a network of completely deterministic processors is simply inefficient. The machine that behaves like a person might be the size of a city and still not be able to think half as fast as we do.

Knowing the seeds of 10 generators generating one number each is equivalent to knowing the seed of one generator and generating 10 numbers from it.


Yet this is not the same, because those ten generators are independent. At any given time, values A and B are not linked by value C.
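Eikka's distinction can be sketched in a few lines. This is a minimal illustration (Python's `random` module standing in for any deterministic PRNG, with arbitrary example seeds): once the seeds are known, both the one-generator and the many-generator arrangements are fully reproducible, so the distinction between them only matters when the seeds themselves come from a truly random source.

```python
import random

# Two ways to produce ten pseudo-random values:
# (a) one generator emitting ten numbers from a single seed,
# (b) ten generators, each with its own seed.
# Either way, knowing the seed(s) makes every value predictable.

one = random.Random(42)
single_stream = [one.random() for _ in range(10)]

many = [random.Random(seed) for seed in range(10)]
multi_stream = [g.random() for g in many]

# Re-running with the same seeds reproduces both streams exactly:
again = random.Random(42)
assert single_stream == [again.random() for _ in range(10)]
assert multi_stream == [random.Random(s).random() for s in range(10)]
```

The independence Eikka describes is therefore a property of the *seed sources*, not of how many generator objects exist.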
Eikka
1.7 / 5 (3) Jun 10, 2011
In essence, the question of linked or independent random values is this: is it possible that a single fundamental particle, or some similar entity, could be responsible for the whole behaviour of an intelligent entity of arbitrary size and composition?

(Well, not the -whole- as there must be a mechanical framework that "filters" this randomness to produce the behaviour, like Brownian motion in water, but you get the point)

Let's say it's a single hydrogen atom (again assuming that intelligence works through a truly random mechanism and I'm not simply mistaken). A single hydrogen atom would be the equivalent of me, and why not you, and everybody else in the world, because given enough readings it would provide enough random values to drive all of humanity, though making those readings would take significantly more time and energy than simply having a hundred billion neurons doing the same thing in parallel.
Eikka
2.3 / 5 (3) Jun 10, 2011

can only have 4.3 billion different permutations of states it can exist in.

Just generate more numbers then.


This is inefficient. What is the point of an artificial intelligence when it may require a hundred billion steps to do what the brain does in one step in parallel?

Actually you don't HAVE to do that (I'm currently designing a software system for another company that works entirely asynchronously


Good luck with that. Most AI researchers don't even seem to try, instead arguing that you can simply deterministically compute the entire thing. I want to know if that's true.


All you require is good error checking / validation. But mostly the software doesn't care what happens in which order.


If we're talking about the brain, there is no error checking or fallbacks to known working states. Everything just happens and the brain has to deal with it. Errors are a fundamental part of th
blawo
3.7 / 5 (3) Jun 10, 2011
Brilliant draft for a new Monty Python sketch.
Vendicar_Decarian
2.7 / 5 (6) Jun 10, 2011
"They're still talking of Intelligence as if it can be replicated by a machine that operates on formal rules." - Eikka

While logical formalism has utterly failed in AI, hope springs eternal.

The fact is, that with considerably increased computation, and considerably slower speed, software can simulate any analog system to any desired accuracy.

Hence it is possible for software to produce a mind.

The key stumbling block IMO has historically been glacially slow hardware speeds compared to the brain, and a belief that language was the method by which the brain learns about the outside world.

The brain has evolved to model the world with all manner of senses, and their reflection in the mind.

Computers still have no visceral "comprehension" of weight, distance, color, separation, speed, taste, brightness, etc... And until one does, they will have no understanding of such things, and hence no intelligent way of relating to the world around them.
blawo
1 / 5 (2) Jun 10, 2011
Thank God the quantum information revolution has started. Good chance we can get rid of this materialistic crap very soon!
Vendicar_Decarian
2 / 5 (4) Jun 10, 2011
"Good luck with that. Most AI researchers don't even seem to try, instead arguing that you can simply deterministically compute the entire thing." - Eikka

Since you can simulate to arbitrary precision any analog system via digital computations, it follows that any digital or analog system - even the brain - can be simulated to any desired accuracy.

Currently it would take roughly the total combined computing power of all the world's hardware to simulate one human brain, but that is mostly because the hardware isn't optimized for the task.

If you simulate a human brain you are computing a mind.
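The claim that digital computation can approximate an analog process to any desired accuracy can be illustrated with a toy example (my own sketch, unrelated to any actual brain-simulation code): numerically integrating the analog decay law dx/dt = -x with ever smaller time steps drives the digital result toward the exact analytic value.

```python
import math

def euler_decay(steps):
    """Integrate dx/dt = -x from x(0) = 1 to t = 1 with a fixed step size."""
    dt = 1.0 / steps
    x = 1.0
    for _ in range(steps):
        x += dt * (-x)  # crude forward-Euler update of the analog law
    return x

exact = math.exp(-1)  # the analytic value of x(1)
errors = [abs(euler_decay(n) - exact) for n in (10, 100, 1000, 10000)]

# Refining the digital step size steadily shrinks the error:
assert errors[0] > errors[1] > errors[2] > errors[3]
```

The same idea, scaled up enormously, is what underlies simulating any analog system in software: accuracy is bought with computation.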

Vendicar_Decarian
1.8 / 5 (5) Jun 10, 2011
"Good chance we can get rid of this materialistic crap very soon!" - blawo

Nonsense. Quantum computers will never be general purpose, and mind isn't a quantum state.
blawo
2 / 5 (4) Jun 10, 2011
"Good chance we can get rid of this materialistic crap very soon!" - blawo

Nonsense. Quantum computers will never be general purpose, and mind isn't a quantum state.


Mind is JUST that. Quantum state.

Vendicar_Decarian
2.3 / 5 (4) Jun 10, 2011
"If you have to have billions and billions of independent random number generators to get true intelligence, as we assume the human brain to possess, then the truly intelligent machine must also be an analog of this structure. " - Eikka

I would be surprised if randomness doesn't play a big part in the consideration of similar alternatives by the brain. But that randomness can come from the decay of previous states of mind and previous states of external stimulus as they impact upon that mind.

While it is true that the inability to perfectly compute the response of a neuron will result in the imperfect emulation of a real mind, this does not imply that no mind is being computed; it may simply be a different mind from the one claimed to be simulated.

Thought is not a result of round-off error. Inspiration might in part be (though I suspect that internal and external sources of bias are more important), and in any case round-off error can be set to any desired level.
Vendicar_Decarian
1 / 5 (4) Jun 10, 2011
"Mind is JUST that. Quantum state." - blawo

You have as much justification for making that claim as claiming that the mind is a block of cheese.

The mind does however consist of a superposition of states. This fact is particularly evident when one considers memory.
Vendicar_Decarian
2.3 / 5 (4) Jun 10, 2011
"My book argues that intelligence/mind/consciousness have never been and cannot be designed, they can only emerge through evolutionary processes." - ngrail

Yes, it is hard to design something that you don't understand.

Still, I see no reason to presume that, once the first programmable mind has been created, components of that mind will be hand-optimized or enhanced to produce a mind that functions more efficiently or more powerfully.

Ultimately this optimization will be performed by the mind itself, or by another artificial mind working on its behalf.
Vendicar_Decarian
2 / 5 (4) Jun 10, 2011
"The difference is that once the initial value is chosen, the list of numbers must follow. This creates a problem: a mechanism that is supposed to be random is now pre-determined." - Eikka

Well, then we can randomize the process every microsecond by the intensity of the sound picked up by some external sensor, XORed with some free-running clock.
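A minimal sketch of that reseeding idea, with a hypothetical `sensor_intensity()` standing in for the external sound sensor (nothing here is a real device API; the fixed return value is a placeholder):

```python
import random
import time

def sensor_intensity():
    """Placeholder for an external sound-level reading (hypothetical)."""
    return 12345  # a real system would sample a microphone here

def fresh_seed():
    # XOR the environmental reading with a free-running clock,
    # as the comment suggests, to get an unpredictable seed.
    return sensor_intensity() ^ time.monotonic_ns()

rng = random.Random(fresh_seed())
draw = rng.random()
assert 0.0 <= draw < 1.0
```

The unpredictability then comes from the environment and the clock, not from the deterministic generator itself.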

Vendicar_Decarian
2.8 / 5 (6) Jun 10, 2011
"... and it's still just a mechanized puppet that says and does everything according to a list of instructions." - Eikka

The tests available are not limited to conversations of course. The mind may be asked to compose a term paper, or find a cure for cancer, compose a sonnet, speculate on the nature of existence, use that speculation to formulate a new physics, etc.

If a puppet can do such things, then that puppet is intelligent.
Vendicar_Decarian
2 / 5 (4) Jun 10, 2011
"Let's say your random number generator outputs one truly random 32 bit number, because that's how much difference you can measure from your random particle. It means that your artifical brain can only have 4.3 billion different permutations of states it can exist in." - Eikka

No. Of course that is false. Its internal state will also be a function of its past state - it will have a memory, won't it? - and the state induced by the questions or statements put to it.

In addition it will presumably be connected to external devices like cameras, robotic arms, sound sensors, and the like. All of these things will be streaming data into the computed mind, altering its state.

Your view wouldn't even be correct if the "mind" connected to the world only via a teletype.
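The point that internal state grows out of memory plus streaming input, not just the random source, can be sketched with a toy state machine (every name and constant here is illustrative): even with a one-bit random generator, the reachable state count far exceeds two.

```python
import random

# A toy "mind" whose next state depends on its memory and its input,
# not only on the random draw: even a 1-bit RNG yields many states.
rng = random.Random(0)  # fixed seed so the run is reproducible

def step(memory, stimulus):
    coin = rng.randrange(2)                 # a tiny random source: one bit
    return (memory * 31 + stimulus + coin) % 10_000

states = set()
memory = 0
for stimulus in range(500):                 # streaming external input
    memory = step(memory, stimulus)
    states.add(memory)

# Far more than two distinct states, despite the 1-bit generator:
assert len(states) > 2
```

The permutation count of the random source bounds only its own output, not the state space of a system with memory and external inputs.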
hush1
1.7 / 5 (6) Jun 10, 2011
that language was the method by which the brain learns about the outside world.

The human senses: all sensory perception signals sent to the brain are language. And yes, I know the electrical potentials sent to the brain are fundamentally different from the physical sources causing the sensory perception signals in the first place.

And the sum of all electric potentials of/in your brain is not a single wave? Fourier says yes. So do I.
Fourier language. So electric potentials are not your language? Your brain is less selective than you are about what goes into your head. Who's in charge here?

Anyway, 'language' IS the method by which the brain learns about the world. It is about how YOU forgot everything your brain does for you. Even the learning is what you forgot. See what the result is - one 'victim's' frustrated plea:

Why do we have brains that can learn complex math and physics in the first 3 years of life, yet we have to go to school to learn maths?
lol

blawo
1 / 5 (4) Jun 10, 2011
You have as much justification for making that claim as claiming that the mind is a block of cheese.


The cheese does not necessarily include states which cannot be translated into language. A quantum state, by definition, has this inability: while a quantum state can be described in language, it cannot be articulated. Which is just, and this is the *just* you got in my previous post, precisely our basic problem with conscious phenomena: the inability to translate it into words. Why, for God's sake, look for complicated and never-sufficient answers when nature is that simple? Namely, consciousness is the quantum part of the mind, the part which cannot be translated into language, because no quantum state in general can be.
Vendicar_Decarian
1.3 / 5 (4) Jun 10, 2011
"The cheese does not necessarily include states which cannot be translated into language." - Blawo

All states can be translated into language. Information is infinitely transmutable. What are you trying to say?

"Quantum state - per definition - has this inability." - Blawo

That is supposition on your part.

"Namely, consciousness is the quantum part of the mind, the part, which cannot be translated into the language - because no quantum state in general can be." - Blawo

Yup, consciousness is the cheese part of the mind.

You write words, and I suppose they have a meaning to you, but that is about where it ends.

Vendicar_Decarian
2.6 / 5 (5) Jun 10, 2011
"The human senses: all sensory perception signals sent to the brain are language." - hush

No. It is data.

You might claim that the protocol with which the sensory cells transmit to the brain is a language, but the information itself clearly isn't.

The pressure of the air against my skin is not language.

"And the sum of all electric potentials of/in your brain is not a single wave? Fourier says yes." - Hush1

Well, no, he doesn't. He would say that over any time interval, the signals can be reconstructed as an infinite sequence of sine and cosine waves. But then the same data can be reconstructed as a series of pulses, or in a host of other ways, all equally valid.
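The representational equivalence described here can be shown concretely: a hand-rolled discrete Fourier transform (a sketch, not an efficient FFT) expresses a sample sequence as sinusoid coefficients, and the inverse transform recovers the original samples, confirming that the sinusoidal description is one faithful encoding among others.

```python
import cmath

def dft(x):
    """Discrete Fourier transform: express samples as sums of sinusoids."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse transform: rebuild the original samples from the sinusoids."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

signal = [0.0, 1.0, 0.0, -1.0, 0.5, 0.0, -0.5, 0.25]
rebuilt = idft(dft(signal))

# The sinusoidal description carries the same information as the raw samples:
assert all(abs(a - b) < 1e-9 for a, b in zip(signal, rebuilt))
```

Neither the sample list nor the coefficient list is "the" data; each is a lossless re-encoding of the other.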

"So electric potentials is not your language?" - Hush1

Nope.

"Anyway, 'language' IS the method by which the brain learns about the world." - Hush1

Language is not a method. Language is a sequence of symbols that once interpreted provides meaning. The processing of language does however require a method.

cont...
Vendicar_Decarian
2 / 5 (4) Jun 10, 2011
You are chronically confusing information with information carrier, and information with information analysis.

And that is why your comments lead nowhere.
ngrailrei
2.3 / 5 (3) Jun 10, 2011
"My book argues that intelligence/mind/consciousness have never been and cannot be designed, they can only emerge through evolutionary processes." - ngrail

Yes, it is hard to design something that you don't understand.

On the contrary. We do that all the time; at least, we design many things with only a minimal understanding of how they work. But you are in any case ignoring or misunderstanding my point, which is that there are good reasons to believe (and not merely to presume) that mind cannot be programmed, period. To explain that has taken a whole book, so please don't expect me to explain it here.
http://www.amazon...p;sr=8-1
blawo
2.3 / 5 (3) Jun 10, 2011
All states can be translated into language. Information is infinitely transmutable. What are you trying to say?


Tell this to the quantum cryptography people: that you can transmute quantum-encrypted photons into classical information and vice versa :) Sorry, my fellow, YOU are the guy who writes words...

Quantum theory is a solid scientific discipline. Terms like "quantum information" and "classical information" both have well-defined physical MEANING, as does the experimentally verified inability to express quantum information in classical bits.

Ignoring the truth about the physical universe around you is your right, but then you cannot hope to remain part of the frontline...
hush1
2.3 / 5 (6) Jun 10, 2011
As far as information is concerned, a machine can be defined by the way it distributes energy introduced to it.

Well, no he doesn't.

Well, yes he does.

The SIGNALS (DATA) (PULSES), or ALL 'HOSTS' OF OTHER WAYS, are nothing more than a distribution of energy. A distribution of energy is information. And information can have an infinite variety of carriers - your signals, pulses, or your 'host' of other ways.

You are chronically confusing signals, data, pulses, and a host of other ways with something which has no common denominator. That is simply false. The underlying denominator of all the words you are using is energy. And the ONLY reason you have different words for the SAME thing is because you are confusing the METHOD OR FORM USED IN THE DISTRIBUTION of energy with energy itself.

Sound, pure sinusoidal tone aside, can be reconstructed as an infinite sequence of sine and cosine WAVES. WE CALL THAT "TALK". Go figure.

That is why none of your comments ever make sense.


hush1
1.8 / 5 (5) Jun 11, 2011
Taking this further.
As far as information is concerned, you can DEFINE LITERALLY EVERYTHING by the way EVERYTHING AND ANYTHING distributes energy introduced to ANYTHING and EVERYTHING.

Is it data? Well that's because that is the way data distributes energy.
Is it language? Well, that's because that is the way language distributes energy.
Is it a dense VD? Well, that's because that is the way VD distributes energy.

And as far as information is concerned, meaning is irrelevant, when the distribution of energy takes place in a form called dense VD.
hush1
1.8 / 5 (5) Jun 11, 2011
Neither terms like "quantum information" and "classical information" have both well defined, physical MEANING.
They have an INTERPRETATION to physical MEANING.

The air of reasoning is thick with the antics of semantics.
Vendicar_Decarian
2.2 / 5 (6) Jun 11, 2011
"...there are good reasons to believe (and not merely to presume) that mind cannot be programmed, period." - Nograil

The existence of public schools, high schools and universities would seem to contradict your assertion.
Vendicar_Decarian
2 / 5 (4) Jun 11, 2011
"Neither terms like "quantum information" and "classical information" have both well defined, physical MEANING." - Hush1

Your sentence contains a contradiction in state and is therefore incomprehensible.
Vendicar_Decarian
2 / 5 (4) Jun 11, 2011
"As far as information is concerned, you can DEFINE LITERALLY EVERYTHING by the way EVERYTHING AND ANYTHING distributes energy introduced to ANYTHING and EVERYTHING." - Hush1

Please define a banana in such a manner.

"Is it data? Well that's because that is the way data distributes energy." - Hush1

Is what data? What is because of what?

I could go on, but I don't think it would improve the comprehensibility of your comment.
Vendicar_Decarian
1.3 / 5 (3) Jun 11, 2011
"As far as information is concerned, a machine can be defined by the way it distributes energy introduced to it." - Hush1

Now you are confusing information with machinery.

Vendicar_Decarian
1 / 5 (3) Jun 11, 2011
"That you can transmute quantum encrypted photons to classical information and vice versa :)" - blawo

Now you are confusing a system of photons (media) with information (content).

You really don't seem to know what information is.

I wouldn't try to argue about the encryption of information until you do.

"both well defined, physical MEANING, as has the experimentally verified inability of expressing quantum information in classical bits." - blawo

It is simply supposition on your part to claim that Quantum Theory has any "physical meaning", just as it was supposition to claim that the theory of crystal spheres contained within crystal spheres had any physical meaning.

It probably is true that the content of a qubit cannot be expressed as a finite sequence of bits.

But then neither can the value of Pi or any irrational number.

Rational numbers can be, of course. Including all rational fractions.

hush1
2 / 5 (6) Jun 11, 2011
"Now you are confusing information with machinery."

"All states can be translated into language. Information is infinitely transmutable. What are you trying to say?"
VD

Now you are confusing translation with transmutability.
Confusing meaning with form. And continue to make no sense.

So as far as a language is concerned, a machine can be defined by the way it transmutes information.

You really have no idea how that sounds, do you?
Hint: You really don't want anyone to tell you.
Vendicar_Decarian
2 / 5 (4) Jun 11, 2011
"Now you are confusing translation with transmutability." - Hush1

No. In the first instance the word "translation" is used to mean to convert from an internal machine state into a series of abstract symbols that encrypts that state in a manner that can be used to reconfigure the original machine state.

The second word, "transmutable", or its base "transmute", is used to state that once translated into information, the form of that information may be altered without altering its content.

The first is a conversion between physical state to a logical state. The second is a conversion between logical states.

"As far as information is concerned, a machine can be defined by the way it distributes energy introduced to it." - Hush1

Again you are confusing information with machinery.
hush1
2 / 5 (4) Jun 11, 2011
"No. In the first instance the word "translation" is used to mean to convert from an internal machine state into a series of abstract symbols that encrypts that state in a manner that can be used to reconfigure the original machine state."

No one knows your usage, or your "means to convert" the original definitions of words, especially the word "translate".

Stick to original definitions and you will be just fine.

"the form of that information" is energy. Which you refuse to recognize. Which hampers your understanding of information and what information means.

Again you are confusing translation with "transmuteable" and additionally, now, with alteration.

"The form of that information may be altered without altering its content." Which is not possible.

The "translation" you want to use maintains a one-to-one correspondence of elements of one set to another set. Called mapping.

hush1
2.3 / 5 (3) Jun 11, 2011
"The first is a conversion between physical state to a logical state."

The territory (physical state) is not the map (logical state).
unknownorgin
1 / 5 (1) Jun 11, 2011
I read an article about scanning a monkey's brain while the monkey was looking at an object, and they were surprised to see a 3-dimensional image of the object in the monkey's brain. As far as I know, all of our digital circuitry is 2-dimensional, like a sheet of paper. 3-dimensional circuitry would have an advantage because data is accessible any point to any point, and objects seen can be examined in a tactile, real-world manner, just like humans and animals must do.
hush1
1 / 5 (1) Jun 11, 2011
This brings this article to mind with your comment:
http://www.physor...ule.html
Vendicar_Decarian
2 / 5 (4) Jun 11, 2011
"No one knows your usage or "means to convert" the original definitions of words, especially the word "translate"." - Hush1

I think everyone knows the meaning. Although you may be an exception.

"the form of that information" is energy. Which you refuse to recognize." - Hush1

No, that isn't clear either. Certainly it is a lack of entropy, and as such that state probably required energy in its creation, but in itself it is not energy. The physical media is however.

You wouldn't again be confusing information with the media that carries it would you?

"Again you are confusing translation with "transmuteable" and additionally, now, with alteration." - Hush1

Actually I just finished explaining it to you. You should read the explanation again and again until you understand it.

Vendicar_Decarian
2.3 / 5 (4) Jun 11, 2011
"The "translation" you want to use maintains a one-to-one correspondence of elements of one set to another set. Called mapping." - Hush1

Yes, that is nice of you to notice. But the problem with using sets, of course, is that you require a set of all possible knowledge to be your universal set, and since it itself represents new knowledge, it would need to contain itself as an element.

So I think it is best to avoid set theory and simply not worry about childish absolutism in a topic as mundane as AI.

Vendicar_Decarian
1 / 5 (3) Jun 11, 2011
"This brings this article to mind with your comment:" - Hush1

Sorry, but your reference has no applicability to this discussion.
Ethelred
1 / 5 (1) Jun 11, 2011
Mathematically the quality of a sequence of random numbers is no better (or worse) if you use one or many such generators.
Not true. IF you do it properly, a set of generators can generate a number with more precision. If you do it wrong, the precision remains the same. Also, as you pointed out, if you use RNGs that are interacting but running on different clocks, you should reach a level of true unpredictability.

Perhaps I shouldn't have given a one on that. Sorry.

I meant with radioactive random number generators.
Those don't work the same as pseudo random since you have to wait for them. They are time sensitive.

Ethelred
Vendicar_Decarian
1.3 / 5 (3) Jun 11, 2011
"I read an article about scanning a monkey's brain while the monkey was looking at an object, and they were surprised to see a 3-dimensional image of the object in the monkey's brain." - Unknown

The article you read was wrong.

The retina, much like the skin, has its signals mapped onto the outer layers of the brain in a nearly 1-to-1 relationship. Hence what is mapped onto the brain is a 2D representation of what is seen.

Ethelred
1 / 5 (1) Jun 11, 2011
Eikka
Which is exactly what you do
Not if you want real unpredictability. If that is what you want you need to have some variance in the timing of different systems. Clearly wait states would be needed to avoid jams.

In essence, the whole state of the "brain" is randomized from a single point,
This is going down a path that has nothing to do with AI. Randomness is only a tiny part of what could be needed. Fuzzy numbers are much more important IF you want to match humans. If you don't want to match humans, then I don't think randomness is needed except occasionally.

Ethelred
Ethelred
3 / 5 (2) Jun 11, 2011
as we assume the human brain to possess,
You are assuming this. I see no need except to avoid predictability. Which is needed for competition, not for analysis.

Again assuming that intelligence works through a truly random mechanism and I'm not simply mistaken.
I am pretty sure you are at least partly mistaken. Some of human intelligence must be deterministic. Some is fuzzy, but fuzzy is not the same as random.

In any case full emulation of humans is not what AI general or otherwise is about.

Most AI researchers don't even seem to try, instead arguing that you can simply deterministically compute the entire thing. I want to know if that's true
Well, back to humans. We are NOT deterministic and the parts are not synchronized. Well, I am pretty sure on that.
hush1
1 / 5 (4) Jun 11, 2011
The physical media is however.

So in your erroneous world of definitions:

The physical media is energy.

At this point it is best for you to avoid the subject of information, and possibly this thread. No one can stop you from embarrassing yourself with your pseudo-definitions and language, though, so follow your ignorance.
Ethelred
2.7 / 5 (3) Jun 11, 2011
My thinking on self-awareness, which may not be needed for AI but is for human intelligence, is that the parts of the brain watch each other. Not all parts watch all parts, but some certainly do watch other parts. I can think about what I am thinking about on verbal and non-verbal levels at the same time. I suspect that cannot be emulated by a Turing machine, only by machines that are NOT synched. Synched machines are ALL Turing machines, except for parts that are truly random, and only some of what is going on in brains of any kind is truly random.

Ethelred
Vendicar_Decarian
1 / 5 (4) Jun 11, 2011
"The physical media is energy." - Ethel

But the patterns imposed on that media that record the encryption are not.

Sadly, you remain completely clueless.
Vendicar_Decarian
1.8 / 5 (5) Jun 11, 2011
"I suspect that cannot be emulated by a Turing machine." - Ethel

The OS in the computer I am now using watches over the memory usage of the applications that are running. It watches over the state of the hard drives, and watches over access to various directories and file types to prevent viral infection. Etc. Etc. Etc.

In comparison your self awareness is minimal.

What is lacking are the higher order concepts.
hush1
3.7 / 5 (3) Jun 11, 2011
Why are you attributing your own words (quotes) to someone else? And then answering and contradicting your own statements.

"The physical media is energy." - Ethel

But the patterns imposed on that media that record the encryption are not.

Sadly, you remain completely clueless.
hush1
1 / 5 (3) Jun 11, 2011
I think everyone knows the meaning. Although you may be an exception.


"I think everyone knows..." What do these four words mean?

I think everyone knows the meaning. Although you may be an exception.

"the form of that information" is energy. Which you refuse to recognize." - Hush1

No, that isn't clear either. Certainly it is a lack of entropy, and as such that state probably required energy in its creation, but in itself it is not energy. The physical media is however.


Clear is your confusion.

Isaacsname
1 / 5 (1) Jun 11, 2011
"Mind is JUST that. Quantum state." - blao

You have as much justification for making that claim as claiming that the mind is a block of cheese.

The mind does however consist of a superposition of states. This fact is particularly evident when one considers memory.


Yes! Exactly. A superposition of states, in constant flux. Never static and always a superposition of fairly precise approximations. But why is the "I", the real dummy in the brain, in the driver's seat of the body, or is that an illusion as well?
hush1
2.3 / 5 (3) Jun 11, 2011
@Isaac

Your body, as well as all things physical, can be cloned.

Actually, a branch of physics, particle physics, prides itself on having elementary particles that are identical to each other.

There is an exception to elementary particles being identical to each other. Being identical depends solely on whether you can view the particles as 'isolated' from the 'surroundings'.

Returning to your body. If I replace your body with a clone of your body, the 'You', you call 'You', will always lack the information to know the difference between the body I took from you, and the body clone replacement.
(Which only means 'You' are the 'driver' - your seat or body makes no difference)

That is like giving software the ability to distinguish between an exchange of identical hardware.

In particle physics that can be done.
It remains to be seen if AI follows a similar strategy as used in particle physics.
Vendicar_Decarian
1 / 5 (3) Jun 11, 2011
No, that isn't clear either. Certainly it is a lack of entropy and as such that state probably required energy in its creation, but in itself it is not energy. The physical media is however.

"Clear is your confusion." - hush1

And again you seem incapable of comprehending the difference between media and information encoded onto that media.

Space itself makes a fine media for encrypting information. Define a 0 as the absence of matter/energy over a separation of 1 unit, and a 1 as the absence of matter/energy over a separation of 2 units.

Now what were you saying about information being energy?
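
Vendicar's spacing scheme can be sketched in code. A minimal illustration (the function names and the marker-position representation are my own, not anything from the thread): bits are stored purely in the gaps between markers, with a gap of 1 unit meaning 0 and a gap of 2 units meaning 1.

```python
# Sketch of encoding information in spacing alone:
# a gap of 1 unit encodes a 0, a gap of 2 units encodes a 1.

def encode(bits):
    """Return marker positions whose separations carry the bits."""
    positions = [0]
    for bit in bits:
        positions.append(positions[-1] + (2 if bit else 1))
    return positions

def decode(positions):
    """Recover the bits from the gaps between adjacent markers."""
    return [1 if b - a == 2 else 0
            for a, b in zip(positions, positions[1:])]

msg = [1, 0, 1, 1, 0]
print(encode(msg))           # [0, 2, 3, 5, 7, 8]
print(decode(encode(msg)))   # [1, 0, 1, 1, 0]
```

The point the sketch makes is the same one argued above: the information lives in the pattern of separations, not in any matter or energy occupying the gaps.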
Isaacsname
not rated yet Jun 11, 2011
@Isaac

Your body, as well as all things physical, can be cloned.

Actually, a branch of physics, particle physics, prides itself in having elementary particles that are identical to each other.

There is a exception to elementary particles being identical to each other. Being identical depends solely on if you can view the particles as 'isolated' from the 'surroundings'

Returning to your body. If I replace your body with a clone of your body, the 'You', you call 'You', will always lack the information to know the difference between the body I took from you, and the body clone replacement.
(Which only means 'You' are the 'driver' - your seat or body makes no difference)

That is like giving software the ability to distinguish between an exchange of identical hardware.

In particle physics that can be done.
It remains to be seen if AI follows a similar strategy as used in particle physics.


..lost me there......what's a particle ?
hush1
1 / 5 (1) Jun 11, 2011
..lost me there......what's a particle


A mathematical expression. One that is consistent with what is measured.
Vendicar_Decarian
1.3 / 5 (3) Jun 12, 2011
"A mathematical expression." - Hush1

So then you are a collection of mathematical expressions.

Consistent forms of thought aren't your strong point are they?
hush1
2.3 / 5 (3) Jun 12, 2011
Space itself makes a fine media for encrypting information. VD


Define 'Space.' Define 'fine media'. Define 'encryption'.

Space makes a fine media. How?

Define a 0 as the absence of matter/energy over a separation of 1 unit, and a 1 as the absence of matter/energy over a separation of 2 units.


There is no need to define: what is a number? what is a zero?
If I define zero, then there is no way of showing that this definition is only for zero, whatever zero is.

Actually the quote, your statement, makes no sense to me. I don't know who can understand you. If no one comes to your aid to explain what you stated, we will all be at a loss to understand what you said.

Information is the distribution of energy without a unit of measure. The distribution occurs literally everywhere, and everywhere has no unit of measure.

What is "the absence of matter/energy"? Does "/" represent division? A ratio? A constant?

We can change the language if this helps.
Are you multilingual
hush1
1 / 5 (2) Jun 12, 2011
You are not logical. Your comments reflect no interest in understanding anyone. You talk with words you don't define.

Why are you answering for Isaac?
hush1
1 / 5 (2) Jun 12, 2011
"A mathematical expression." - Hush1

So then you are a collection of mathematical expressions.

Consistent forms of thought aren't your strong point are they?


"So then you are a collection of mathematical expressions."
A conclusion without a premise.
State your premise.

"Consistent forms of thought aren't your strong point are they?"
Another conclusion without a premise.

This is why consistency and any forms of thought are not strong points for you.

You want to be argumentative. Only for the sake of argument. And forsake understanding, insight, clarity, reason, logic, science, curiosity, inquiry, discourse, dialogue, and self.
And you pay the price. And you remain ignorant. And continue to make no sense.


Ethelred
3.5 / 5 (4) Jun 12, 2011
Deesky

If you don't understand the conversation don't rank people.

If you DO understand then why aren't you in the conversation?

Now if this was brain dead blathering from Marjon I can understand the ones. However this has been an attempt at communication however poorly some have understood it. You included or you would not have ranked me a one on that post.

You too should read the Emperor's New Mind by Penrose. Warning. The book will make your brain hurt if you actually try to follow it. Unless you are a genius. A high genius and a Turing machine fan.

A Turing machine, which in the original thought experiment version was a serial device with paper tape, can emulate any computer presently in commercial use. Only very slowly. Thus a Turing machine is useful for looking at how computing is really done. Any machine that can be emulated by a Turing Machine IS a Turing Machine.

Ethelred
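
A Turing machine is simple enough to sketch directly, which makes Ethelred's point concrete. A minimal one-tape simulator (the rule table here, which sweeps right inverting each bit of a binary string, is just an illustrative program for it, not anything from the discussion):

```python
def run_tm(tape, rules, state="start", blank="_", steps=1000):
    """Run a one-tape Turing machine: rules maps (state, symbol)
    to (next_state, symbol_to_write, move 'L' or 'R')."""
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(pos, blank)
        state, write, move = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Illustrative rule table: flip every bit, halt on reaching a blank.
invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm("1011", invert))  # 0100
```

Anything a real CPU does can in principle be re-expressed as a (vastly larger) rule table of this shape, which is exactly why the Turing machine is the standard yardstick for what computation can and cannot do.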
Ethelred
3.7 / 5 (3) Jun 12, 2011
Vendicar
"The physical media is energy." - Ethel
I didn't say that.

The OS in the computer I am now using watches over the memory usage of the applications that are running.
Yes. It is a Turing machine.

In comparison your self awareness is minimal.
You don't understand what I am getting at. Try The Emperor's New Mind by Roger Penrose. Turing machines are limited by Godel's Proof. There are things that Turing Machines simply cannot do due to the limits on purely logical thinking. Penrose thinks we beat these limits by using quantum methods. I was thinking there might be other ways that can be emulated, or perhaps forced, by making the machine asynchronous to get it outside the Turing limits.>>
Ethelred
3 / 5 (2) Jun 12, 2011
What is lacking are the higher order concepts.
I think the wait states involved in an asynchronous machine may help produce that by spending the time to analyze the processes in the other machines. The CPU in your PC is all one machine. None of it is aware of what the other parts are thinking; the parts are merely engaged in control, not analysis.

Now what were you saying about information being energy?
He might have been thinking of Maxwell's Demon vs. information theory. There has been a lot of thinking going on about information vs. energy due to that idea.

Either that or he is using energy to mean matter/energy to save characters. Or he is confused. I am never quite sure with hush whether he is saying something significant or trivial in regards to language. That often depends on your point of view, and stuff that seems profound to him is often stuff I thought of long ago that has begun to seem basic to me.>>
Pyle
3 / 5 (2) Jun 12, 2011
Wow! This is a fun one. I am so sorry I missed out on the early goings. I wish I had a sliver of the intelligence of Dr. Goertzel. (pun...)

Anyway, VD and hush puppy, you guys are great. Your disagreements are all semantics. While this is everything in debate, I think it is very little in reality. I think hush says it best just now:
You want to be argumentative. Only for the sake of argument.

Try some introspection though and see that you are there too.

We can all ask Maxwell's daemon if information = energy. Until the experiment is designed and carried out we can all agree to disagree I think.

Anyway, maybe Dr. G will answer a question about self awareness. I am pretty sure it is required, and is already quite prevalent as VD said quite well, only the higher orders are lacking.

As for randomness, why? So that an AGI can be wrong? The universe introduces enough randomness without designing in errors in thinking. Silly Eikka.
Pyle
3 / 5 (2) Jun 12, 2011
Ethelred I hate you. I spend all this time writing my worthless comment so that you can place yours, with most of my points just above mine. thppppt!

Regarding your counter to VD's awareness point, I humbly disagree. I think VD was on the right track. What is lacking is a general awareness of the computer's relative position in the environment. Ultimately we'll add that and then push into richer and richer environments. AGI environments will be vastly richer than our current human state in short order after their creation.

And of course I don't hate you Eth. Actually quite sorry I interrupted your posts.
Ethelred
1 / 5 (1) Jun 12, 2011
So then you are a collection of mathematical expressions.
I suspect the entire MultiVerse is exactly that. Including iterative functions.

Consistent forms of thought aren't your strong point are they?
English is not his native language and often is thinking and translating. Stuff gets lost.

Ethelred
hush1
2.3 / 5 (3) Jun 12, 2011
Any machine that can be emulated by a Turing Machine IS a Turing Machine.


So someone point out the inadmissibility of my former statement:
As far as information is concerned, a machine, (Turing Machine included), can be defined by the way it distributes energy introduced to it.

VD asserts and alleges:
"Now you are confusing information with machinery."

I gave contra to that assertion:
Information is the distribution of energy.

The distribution (of energy) defines the physical.
Yes, a banana is information. Because the distribution of energy defines the information we call banana.

To account for 'randomness' or unpredictability I resorted to the concept from the quantum states of the electron - from particle physics. Swapping out IDENTICAL electrons CHANGES the STATE of the electron, and that state is neither physically accessible nor detectable at any level of measurement, microscopically or macroscopically.
hush1
2.3 / 5 (3) Jun 12, 2011
The object, as well as the object's swapped-out electron, remains undetectably UNCHANGED, yet one state of that electron is CHANGED. A change that cannot be detected by any physical means is and remains information for that physical object. The object is no longer in its previous state, despite the changed state being undetectable physically. That changed state is called information. A state describable by one of the properties an electron possesses, due to the way the swapped-out electron distributes its energy. The electron's phase-changed state.
hush1
3 / 5 (2) Jun 12, 2011
English is not his native language and often is thinking and translating. Stuff gets lost.


I am bilingual. Raised that way. I have ONE language and two channels. If you hear stereo from me, it is cross channeling. Nothing serious. No one has called me incoherent. Yet. And everyone says stuff once in a while. I have a stereo perspective when viewing words, extra layers of meaning - which, I thought, until now, helps me sort out the word closest to the meaning I want to express (in that language).

To no avail. VD remains incomprehensible to me. Inconsistent.
hush1
3 / 5 (2) Jun 12, 2011
I try not to rate discourse. I do sometimes rate humor. No one's rating score is of interest to me.
Ethelred
1 / 5 (1) Jun 12, 2011
Hush1
Define 'Space.'
A set of numbers intended to deal with position. More properly Space-Time. If you think that is circular then just call it a set of numbers which we label as space-time. I can live with it either way.

Define 'fine media'.
Anything that information of any kind is stored in.

Define 'encryption'.
Adding a known sequence of noise to the information to make it look like noise instead of information. Only those that know the exact sequence of noise can extract the original information.

There is no need to define: what is a number? what is a zero?
I cheat. Do you have definitions? Plural. Then you have numbers. It matters not what you use for the numbers as long as they can have an order.

definition is only for zero, whatever zero is.
The number one minus itself. Or two minus itself no matter how you are representing one and two. This is for linear numerical sequences as circular sequences aren't fit for our Universe.>>
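
Ethelred's definition of encryption above - adding a known noise sequence so the result looks like noise, recoverable only by someone who knows the sequence - is essentially a keystream (XOR) cipher. A minimal sketch, assuming a seed-based pseudo-random keystream for illustration (this is not cryptographically strong):

```python
import random

def xor_cipher(data: bytes, seed: int) -> bytes:
    """XOR data with a pseudo-random noise sequence derived from seed.
    Because XOR is its own inverse, applying the same noise a second
    time recovers the original."""
    rng = random.Random(seed)
    noise = bytes(rng.randrange(256) for _ in data)
    return bytes(d ^ n for d, n in zip(data, noise))

msg = b"banana"
scrambled = xor_cipher(msg, seed=42)       # looks like noise
print(xor_cipher(scrambled, seed=42))      # b'banana'
```

Only a party that knows the seed (and hence the exact noise sequence) can strip the noise back off, which is precisely the definition given above.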
Ethelred
1 / 5 (1) Jun 12, 2011
Actually the quote, your statement, makes no sense to me
Made sense to me.

You two have started whacking each other instead of discussing. Not surprising with Vendicar's nasty habits of acting arrogant and assuming everyone is a Tard till proven otherwise. This causes him to miss a lot of rational statements because he assumed it made no sense and didn't bother to look harder.

I think most of us do that now and then but Vendicar seems to think it is an art form that needs to be cultivated rather than avoided.

Information is the distribution of energy without a unit of measure
Sorry but that isn't information. You need a unit of measure for that. It can be completely arbitrary BUT it must be agreed upon by all the users of the information.

What is "the absence of matter/energy"?
Think of it as YES or NO. Bit SET or NOT-SET. ONE or ZERO.

The movie Cool Hand Luke nails the two of you with one line.
What we have heah,.. is a failure... to communicate.
>>
Ethelred
1 / 5 (1) Jun 12, 2011
We can change the language if this helps. Are you multilingual
Try math and GEEK speak. Vendicar is trying to use information theory. You don't seem to have got that.

Information is the distribution of energy.
No. Information, in the context of intelligence, is bits or something similar with meaning attached to them by the users. Energy is just the form of the bits OR the CHANGE in the state of the bits. Energy is lost in the change, though there is supposed to be a way to retain the energy IF the processes involved in the changes are reversible.

the information we call banana.
This is a mistake in thinking. Information is what the users store and manipulate. The banana is data that is converted to information. Unless we are talking about a virtual banana in a simulation. Try having a discussion about DNA and information with an ID fan and you will understand where I am coming from on this. The DNA is the data but our transcription of it into CGTA is information.>>
Ethelred
3 / 5 (2) Jun 12, 2011
Swapping out IDENTICAL electrons CHANGES the STATE of the electron
No. If it is an exact swap there is no change of state. All electrons are identical. So far anyway and in theory that should remain true. Replacing them perfectly would be undetectable in practice and theory.

The object is no longer in it's previous state.
No. You don't understand the nature of electrons. They are IDENTICAL and if no change can be detected there is no change. There is nothing to detect because the state is the same in your example. IF you flip spin for instance then you have a change of state AND of information.

Due to the way the swapped out electron distributes it's energy. The electons' phase changed state.
Go read up on this. You are just plain wrong. It has even been supposed that there is only ONE electron and it is just in different places and times. Silly to me, but I mention it so you get a clue.

IF THE ELECTRON HAS THE SAME STATE THERE IS NO CHANGE IN INFORMATION.

Ethelred
Vendicar_Decarian
1 / 5 (3) Jun 12, 2011
"Define 'Space.' Define 'fine media'. Define 'encryption'." - hush1

Get a dictionary. Child...

"Space makes a fine media. How?" - hush1

It was explained to you.

I will repeat the explanation you chose to ignore.

Define a 0 as the absence of matter/energy over a separation of 1 unit, and a 1 as the absence of matter/energy over a separation of 2 units.

"There is no need to define: what is a number? what is a zero?" - hush1

Not to any thinking person. I take it that you seek to exclude yourself from that set.

"Actually the quote, your statement, makes no sense to me" - hush1

No surprise there.

Clearly you understand very little. So I would keep my mouth shut if I were you. But by all means, ask sentient questions. It is the only way you will learn.
Vendicar_Decarian
1 / 5 (3) Jun 12, 2011
"So then you are a collection of mathematical expressions."

"A conclusion without a premise." - hush1

Fact: You are a collection of fundamental particles.

Your assertion: A fundamental particle is a mathematical expression.

Conclusion: You are a collection of mathematical expressions.

Conclusion1: Consistency of thought isn't one of your strong points.

Conclusion2: Simple one step logic eludes you.

Stay on the sidelines, child. You just can't grasp it.
Vendicar_Decarian
1.3 / 5 (3) Jun 12, 2011
"You don't understand what I am getting at." - Ethelred

I was stating that the self awareness of the computer on your desk is in many ways greater than your own awareness of self. It isn't intended as an insult, but as a manner of comparison.

Desktop computers are intimately aware of their own internal states: remaining memory, peripherals in use, etc. Human awareness, as defined by conscious awareness, is limited to a tiny number of internal states, although those states operate at a higher cognitive level.

Vendicar_Decarian
1 / 5 (3) Jun 12, 2011
"I suspect the entire MultiVerse is exactly that. Including iterative functions." - Ethelred

Don't confuse description (math) with physical existence...

"English is not his native language and often is thinking and translating." - Ethelred

Then he should have said so earlier, and I wouldn't have been so hard on him.

Vendicar_Decarian
1 / 5 (3) Jun 12, 2011
"As far as information is concerned, a machine, (Turing Machine included), can be defined by the way it distributes energy introduced to it." - Hush1

That is pure supposition on your part.

"I gave contra to that assertion: Information is the distribution of energy." - hush1

Your statement is imprecise.

A correct statement would be information is encrypted in a pattern of energy.

"The distribution (of energy) defines the physical." - hush1

Meaningless...

"Yes, a banana is information." - hush1

Then you have lost all meaning to the term "information".

"Swapping out IDENTICAL electrons CHANGES the STATE of the electron" - hush1

Then the electrons were not identical.

Vendicar_Decarian
1 / 5 (3) Jun 12, 2011
"The object, as well as the object's swapped out electron remains undetectable UNCHANGED, yet one state of that electron is CHANGED." - hush1

If the electrons were identical then nothing has changed.

Perhaps you would rather argue that the electrons are not identical.

"The object is no longer in it's previous state. " - hush1

It most certainly is, if the electrons were identical.

The rest of your comment is equally silly.
cockmuffin
2.6 / 5 (5) Jun 12, 2011
VD, quit being such a douchebag.
hush1
1 / 5 (1) Jun 12, 2011
No. If it is an exact swap there is no change of state. All electrons are identical. Eth.


The wave function of fermions, including electrons, is antisymmetric, meaning that it changes sign when two electrons are swapped; that is, ψ(r1, r2) = -ψ(r2, r1), where the variables r1 and r2 correspond to the first and second electrons, respectively. Since the absolute value is not changed by a sign swap, this corresponds to equal probabilities.

An antisymmetric wave function for a quantum state of two identical electrons in a 1-dimensional box. If the particles swap position, the wave function inverts its sign.

http://en.wikiped...Electron

Under Quantum Properties.
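
The sign flip described in the quoted passage can be checked numerically. A toy antisymmetric two-electron wave function in the usual Slater-determinant form, where the two one-particle orbitals (sin and cos here) are arbitrary stand-ins chosen purely for illustration:

```python
import math

def phi_a(x):
    """First one-particle orbital (arbitrary choice for the demo)."""
    return math.sin(x)

def phi_b(x):
    """Second one-particle orbital (arbitrary choice for the demo)."""
    return math.cos(x)

def psi(x1, x2):
    """Antisymmetrized two-particle wave function (Slater-determinant form)."""
    return phi_a(x1) * phi_b(x2) - phi_b(x1) * phi_a(x2)

# Swapping the particles flips the sign: psi(a, b) == -psi(b, a)
print(abs(psi(0.3, 1.7) + psi(1.7, 0.3)) < 1e-12)  # True

# Two fermions in the same one-particle state give zero amplitude,
# which is the Pauli exclusion principle falling out of the algebra:
print(abs(psi(1.0, 1.0)) < 1e-12)                  # True
```

Note that |ψ(r1, r2)| = |ψ(r2, r1)| even though the sign flips, which is the "equal probabilities" point in the quote.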

CSharpner
1 / 5 (1) Jun 12, 2011

Unless you implement artificial time stamps / time keepers for all passed messages


Which is exactly what you do - the computers have schedulers to keep them from crashing by getting into a data gridlock or race conditions etc. because they are not analog machines that can deal with division by zero or other fibs like that.


It's not what *I* do when I write parallel code, unless I need a specific type of syncing. If I was writing AI code and if I wanted randomness, I would make a point of NOT using timestamps. Most parallel code does NOT require that level of syncing. If it did, in many cases, it's not a good candidate for parallelism and would likely be written as a serial operation.

With AI, you generally have lots of loosely sync'd threads and LOTS of external input constantly interacting with internal thoughts. There's plenty of natural randomness to go around.
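
CSharpner's point - that most parallel code needs no global timestamps, only minimal syncing where threads actually touch shared state - can be sketched as follows. The worker function is a made-up example; only the shared append is guarded, and no thread knows or cares in what order the others run:

```python
import threading

results = []
lock = threading.Lock()  # guards only the shared append; no global clock

def worker(n):
    # Each thread computes independently: no timestamps, no ordering.
    value = n * n
    with lock:
        results.append(value)

threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The threads finish in a nondeterministic order, but the combined
# result does not depend on that order.
print(sorted(results))  # [0, 1, 4, 9]
```

The arrival order in `results` varies from run to run, which is exactly the kind of "natural randomness" the comment describes, while the answer itself stays deterministic.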
Vendicar_Decarian
1 / 5 (4) Jun 12, 2011
"The wave function of fermions, including electrons, is antisymmetric, meaning that it changes sign when two electrons are swapped; that is, ψ(r1, r2) = -ψ(r2, r1), where the variables r1 and r2 correspond to the first and second electrons, respectively." - hush1

Wrong. That anti-symmetry is what is commonly known as spin, and it can not be stated in general for two arbitrary electrons that their spin states are opposite.

This anti-symmetry can only be definitively presumed if two electrons share their other quantum numbers.

You have also committed another error by presuming that electrons with opposite spin are exactly the same.

By definition they are not since spin is the manner by which they differ.

hush1
1 / 5 (1) Jun 12, 2011
"it can not be stated in general for two arbitrary electrons that their spin states are opposite." -VD

Nor can it be stated in general for two arbitrary electrons that their spin states are identical.

So? What is the point of your first statement?

"You have also committed another error by presuming that electrons with opposite spin are exactly the same." -VD

Wrong again.
I swapped two electrons with opposite spin. And ended up with two electrons that are exactly the same.

I did not assume that electrons with opposite spin are exactly the same.
I assumed that electrons with identical spin are exactly the same. (They are)

"By definition they are not since spin is the manner by which they differ." -VD

Swapping them is the manner by which they are identical.
CSharpner
not rated yet Jun 12, 2011
Let's say your random number generator outputs one truly random 32 bit number, because that's how much difference you can measure from your random particle. It means that your artificial brain can only have 4.3 billion different permutations of states it can exist in.

That would only be true if your artificial brain's state memory were only 32 bits wide. Not even a simple wristwatch computer has a state THAT small. The number of "different permutations" of a state is 2 raised to the number of bits that hold the state. In an AI system, that's roughly the amount of RAM storage. Let's say you've got 4 TB. That's 32 trillion bits. Two to the power of 32 trillion is the number of unique machine states that machine supports. The size of any random number generator has nothing to do with the amount of unique machine states the machine supports. You're giving significantly too much credit to a random number in AI.
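
The arithmetic in that post is easy to check. 32 bits give about 4.3 billion states, while 2 to the power of the bit count of a 4 TB memory (decimal terabytes assumed here) is far too large to compute directly, but its order of magnitude is easy to estimate from log10(2):

```python
import math

# States of a 32-bit register:
print(2 ** 32)                 # 4294967296, about 4.3 billion

# Bits in 4 TB of RAM (decimal terabytes):
bits = 4 * 10**12 * 8          # 32 trillion bits

# 2**bits is astronomically large; estimate its size instead:
digits = bits * math.log10(2)  # decimal digits in 2**bits
print(f"roughly 10**{digits:.3g} distinct machine states")
```

So the state space of the machine, not the width of any random number it consumes, sets the ceiling on how many configurations it can be in.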
hush1
1 / 5 (1) Jun 12, 2011
By definition you need two electrons to determine the sign of the wave function.

Of course, you can do away with the concept of spin all together.
Which is defeating the concept of introducing spin in the first place.
Vendicar_Decarian
1 / 5 (3) Jun 12, 2011
"Nor can it be stated in general for two arbitrary electrons that their spin states are identical." - Hush1

Then your claim that the two electrons are identical is false, since you now claim that you can never know.

"So? What is the point of your first statement?"

I was correcting your false claim that electrons with opposite spin were identical.

"And ended up with two electrons that are exactly the same." - Hush

Then they didn't have opposite spin.

"I did not assume that electrons with opposite spin are exactly the same. " - Hush1

Then you shouldn't have argued that two electrons with opposite spin were identical, and you should not have claimed that replacing one electron with another of opposite spin was an act of replacing one electron with one that was identical.

"Swapping them is the manner by which they are identical." - hush1

Two electrons are identical if and only if they have the same quantum numbers. Spin is such a number. Hence your claim of identity is false.
Vendicar_Decarian
1 / 5 (3) Jun 12, 2011
Hush1.. You are clearly not willing to debate rationally or honestly. Quit now before you completely ruin your reputation.

"By definition you need two electrons to determine the sign of wave." - hush1

No. You just need a magnetic field or an electric field gradient.

"Of course, you can do away with the concept of spin all together." - Hush1

Only in Tard Land.

hush1
1 / 5 (2) Jun 12, 2011
"I was correcting your false claim that electrons with opposite spin were identical." -VD

Wrong again. I never made that claim.

"Then they didn't have opposite spin." -VD

Wrong again. Before they were swapped they did.

"Then you shouldn't have argued that two electrons with opposite spin were identical,.." -VD

Wrong again. I never argued that.

"and you should not have claimed that replacing one electron with another of opposite spin was an act of replacing one electron with one that was identical." -VD

Wrong again. I never made that claim. The act of swapping makes them identical.

"Two electrons are identical if and only if they have the same quantum numbers. Spin is such a number."

Absolutely correct. And the two electrons that were swapped did share those numbers. Just not at the same time.

"Hence your claim of ident is false"-VD

Hence the claim I never made can never be false.

hush1
1 / 5 (2) Jun 12, 2011
"No. You just need a magnetic field or an electric field gradient."-VD

No. You just need a gradient where you can label what is up or down.

"Only in Tard Land"-VD

That can only be said by those who inhabit it, or once did.
Au-Pu
1 / 5 (5) Jun 12, 2011
Dr Ben Goertzel is self-deluding, as are all the so-called AI "developers".
Computers can only operate within the limits of their programming.
Human intellect can find itself in an alien environment, assess that environment and find ways to survive and even flourish in it.
That is because the brain is adaptive.
Also, how do you program intuition into a computer?
AI, like so much else in the electronics field, is a bullshit concept used to extract funds out of gullible politicians who fancy that it could give them some sort of advantage when developed.
Which proves that the politicians are as delusional as the AI developers.
They make a good pair except for the fact that it is taxpayers' money they are wasting.
Vendicar_Decarian
2 / 5 (4) Jun 12, 2011
"No. You just need a gradient where you can label what is up or down." - Hush1

Please explain to us what kind of non-electromagnetic gradient you intend to use to detect or influence the spin of electrons.

Look boy.... You are way over your head. You have been spouting vapid nonsense for days.
Vendicar_Decarian
2.6 / 5 (5) Jun 12, 2011
"Computers can only operate within the limits of their programming." - Au-Pu

Since a computer of sufficient size can simulate your brain does that mean that you are wrong in your assertion or that you are also incapable of exceeding the limits of your programming?

You do realize, don't you, that genetic algorithms have allowed computers to program themselves?
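
A minimal sketch of what that claim refers to: a genetic algorithm that evolves a bitstring solution nobody wrote by hand. The fitness function here is OneMax (count the 1-bits), the textbook toy problem, and all the parameters are illustrative choices, not anything from the article:

```python
import random

random.seed(0)
TARGET_LEN = 20

def fitness(genome):
    """OneMax: number of 1-bits; a perfect genome scores TARGET_LEN."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

# Start from 30 random genomes and evolve.
pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(30)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == TARGET_LEN:
        break
    parents = pop[:10]  # elitism: the fittest survive unchanged
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

print(fitness(pop[0]), "/", TARGET_LEN)
```

Nothing in the program specifies the solution; selection, crossover, and mutation find it, which is the sense in which the computer "programs itself" within the search space it was given.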

Isaacsname
1 / 5 (1) Jun 12, 2011
Also how do you program intuition into a computer.


Ahh yes, intuition. Is it not correct to assume that when I experience "intuition", it is in fact my brain calculating probability amplitudes for future events? "Oooh, I knew I should have picked number 6" is not really "I" knew; it is "my brain knew, and I chose to consciously override the answer my brain provided" through calculations in the sub-conscious. I remain convinced that compared to the sub-conscious mind, the forefront human consciousness is practically as dumb as a bag of hammers, and future efforts to emulate this for the purpose of "AI" will yield null results.

Off topic: Cats and Roombas, opinions or thoughts about this ?
Recovering_Human
5 / 5 (1) Jun 12, 2011
Computers can only operate within the limits of their programming.


So can we; we just have better hardware and more complex software. For now. Again, what if, in some decades, we had the technology required to scan a human brain's structure down to the last cell, model the scan on a computer, and run it (with certain other relatively-minor technicalities taken care of)? Unless you really think there's some magical essence beyond the laws of physics that gives us our intelligence, I don't see how you could argue that a computer as intelligent as a human hadn't been created.
hush1
1 / 5 (1) Jun 12, 2011
lol
Lostland.
Science does not interest you.
Us? lol How many are there of you? And you are projecting.
The best way to stop the game you are playing is to encourage you not to continue playing. So I will encourage you. By not replying to you. Perhaps others will recognize this as well.
That is the least anyone can do for you. And it is honest.
hush1
1 / 5 (1) Jun 12, 2011
Please explain to us what kind of non-electromagnetic gradient you intend to use to detect or influence the spin of electrons.

Look boy.... You are way over your head. You have been spouting vapid nonsense for days. -VD


The response to this is above the quote.
Vendicar_Decarian
2.3 / 5 (6) Jun 12, 2011
"So I will encourage you. By not replying to you." - Hush1

Have you not noticed that your claim of not replying took the form of a reply.

Could anything scream "Tard" more than that?

hush1
1 / 5 (2) Jun 12, 2011
lol
Lostland
Typo correction:

"So I will encourage you. By not further replying to you."
Vendicar_Decarian
2.1 / 5 (7) Jun 12, 2011
And again you have just contradicted yourself by responding. This time with a correction to your previous response in which you said that you would not respond.

I guess that means (Tard)**2
cockmuffin
1 / 5 (2) Jun 12, 2011
And again you have just contradicted yourself by responding. This time with a correction to your previous response in which you said that you would not respond.

I guess that means (Tard)**2

How about (Douchebag)**2 ?
hush1
1 / 5 (2) Jun 12, 2011
lol

One last typo correction:
"I'm a scientist 40 years" When corrected reads...
" "

lol
Lostland
Vendicar_Decarian
1 / 5 (4) Jun 12, 2011
Now (Tard)**3

Vendicar_Decarian
1 / 5 (6) Jun 12, 2011
"How about (Douchebag)**2 ?" - HushPuppet

Please don't hate me because I am vastly smarter than you.
CSharpner
5 / 5 (2) Jun 13, 2011
Dr. Ben Goertzel is self-deluding, as are all the so-called AI "developers".
Computers can only operate within the limits of their programming.
Human intellect can find itself in an alien environment, assess that environment, and find ways to survive and even flourish in it.
That is because the brain is adaptive.
Also, how do you program intuition into a computer?
AI, like so much else in the electronics field, is a bullshit concept used to extract funds out of gullible politicians who fancy that it could give them some sort of advantage when developed.
Which proves that the politicians are as delusional as the AI developers.
They make a good pair, except for the fact that it is taxpayers' money they are wasting.

What kind of experience do you have in programming? I've got 29 years' experience, and I can tell you, as a matter of fact, that software CAN be written to adapt. I write adaptive software every day.
(continued...)
CSharpner
5 / 5 (2) Jun 13, 2011
(continued...)
AI is only limited to the complexity of adaptation by the hardware we're working with and the skills of the programmers. Just because YOU don't understand it, doesn't mean it can't be done. Unless you have some impressive programming skills, don't be telling me what me and my peers can and can't do with computers.
hush1
1 / 5 (1) Jun 13, 2011
@CSharpner
I am aware of (in a limited sense) "adaptive" hardware: available circuits switching to other available circuits to optimize the process or assigned task at hand.

The 'switch' controlling the 'switch' in hardware can only be attributed to 'adaptive' software. Is this (simplistic) causal view correct?
CSharpner
3 / 5 (2) Jun 13, 2011
@CSharpner
I am aware of (in a limited sense)"adaptive" hardware. (Available circuits switching to other available circuits to optimize the time, (the process or assigned task) at hand.

The 'switch' controlling the 'switch' in hardware, can only be contributed to 'adaptive' software. Is this (simplistic) causal view correct?


More or less, but I wasn't referring to adaptive hardware. I was referring to software that adapts, with or without adaptive hardware.

Software can emulate adaptive hardware if it needs to, but that's really outside the scope of my point. Software can adapt to new situations. Actually, MOST software DOES adapt to one degree or another. Human intelligence is very powerful software with a huge and highly flexible data storage mechanism and awesome data querying ability. Intuition can be replicated by low-level routines running outside the context of the "consciousness" thread(s).
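As a toy illustration of that last point (my own sketch, with invented data; no real cognitive architecture implied), "intuition" can be modeled as a background worker that scans raw input for rough patterns and posts hunches to a queue, which the main ("conscious") thread reads whenever it gets around to it:

```python
import queue
import threading

# "Intuition" as a low-level routine outside the main thread: it flags
# values that sit far from the running mean and posts them as hunches.
hunches = queue.Queue()

def intuition_worker(stream):
    mean, n = 0.0, 0
    for x in stream:
        n += 1
        mean += (x - mean) / n          # incremental running mean
        if n > 3 and abs(x - mean) > 2.0:
            hunches.put(f"outlier? {x:.1f} vs running mean {mean:.1f}")

data = [1.0, 1.2, 0.9, 1.1, 9.0, 1.0]   # invented sensor readings
worker = threading.Thread(target=intuition_worker, args=(data,))
worker.start()
worker.join()

# The "conscious" thread consumes whatever hunches have accumulated.
received = []
while not hunches.empty():
    received.append(hunches.get())
print(received)
```

The worker never explains *why* something feels off; it just surfaces a hunch, which is about as close to intuition as a six-line pattern spotter gets.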
CSharpner
not rated yet Jun 13, 2011
New article posted that fits right in with our discussion on how AI can be highly adaptive.

http://www.physor...lly.html
hush1
1 / 5 (1) Jun 13, 2011
@CSharpner.
A wonderful, thought-provoking link. Thanks.

Traffic lights on Mars will present an enormous challenge.
Every star, and every planet with an atmosphere different from ours, has a unique spectroscopic signature.
We will have to adapt.
With the right glasses, I'm sure we can make the color of chemicals 'earth-like' no matter what planet we inhabit.
Adaptive spectroscopy for AI is not an area of research, yet.

CSharpner
3 / 5 (2) Jun 13, 2011
You're welcome, but credit the physorg guys for posting it. I'll be happy to take all their credit though! :)
simpletim
not rated yet Jun 14, 2011
If the goal of AGI is to recreate a "human level of intelligence" then it makes sense to me to start small and work up, just like life evolved from small critters into humans.

Also, neurons are in an environment (some animal body) which is itself in an environment. In a virtual simulation, the richness of all three elements can be modified until we achieve a functional virtual organism that acts just like a real one.

Experiments creating virtual nematodes and environments might be a good place to start. Changes to either the neuronal model, the nematode model, or the environment model can be observed to determine effects. This should lead to insights on the level of complexity that needs to be modelled to achieve the desired outcome.

I'm not sure exactly what the desired outcome is, but this can be an evolutionary path towards a range of outcomes.
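A deliberately crude version of that three-layer split (neuron model, body model, environment model) might look like the following. Everything here is invented for illustration; it is nowhere near a real C. elegans model, but each of the three functions is a separate dial that could be made richer independently:

```python
# Toy "virtual nematode": three swappable layers, each minimal.

def environment(position, food_at=10):
    # Environment model: a food signal that decays with distance.
    return 1.0 / (1.0 + abs(food_at - position))

def neuron(left_signal, right_signal):
    # Neuron model: compare two sensors, steer toward the stronger one.
    if left_signal > right_signal:
        return -1
    if right_signal > left_signal:
        return +1
    return 0

def body(position, motor):
    # Body model: take one step in the direction the neuron chose.
    return position + motor

pos = 0
for _ in range(30):
    motor = neuron(environment(pos - 1), environment(pos + 1))
    pos = body(pos, motor)
print("final position:", pos)   # the organism settles at the food, 10
```

Swap in a spiking neuron, a segmented body, or a 2-D chemical gradient and the loop stays the same, which is the evolutionary-path idea in miniature.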

hush1
1 / 5 (1) Jun 14, 2011
Acoustics is where I started. That was too broad. Sound was the next step backwards for me. Using sound as a platform, I asked myself how any life form can utilize sound. That was too broad. So I asked myself, of all life forms, which have known abilities to utilize sound in any manner. That was too broad. The next step backwards was to find life forms that can perform an act labeled or defined as hearing. (Really convenient, as that step has an inanimate analogy: the microphone.) A few more steps backwards and you reach a molecular level. (Really convenient: at that level you still have support from both the natural and life sciences.) There you realize that C18H24O2 ((8R,9S,13S,14S,17S)-13-methyl-7,8,9,11,12,13,14,15,16,17-decahydro-6H-cyclopenta[a]phenanthrene-3,17-diol) is the chemical foundation of hearing and touch. And the origin of the human language. No one will agree with that forward leap. Yet.
There might be a better chemical foundation for human language.
re_coyote
not rated yet Jun 14, 2011
Graft vs. Host. . .
hush1
1 / 5 (1) Jun 14, 2011
@rc
lol
The last sentence was rhetorical. Nature selected a foundation that avoids the least complications. For humans at least.
QSO
not rated yet Jun 17, 2011
There is no such thing as an objective perspective, by virtue of the subject doing the perceiving. The perspective is instantiated in the language used to communicate. Noise is unintelligible information, but only to those who cannot understand it. A top-down model might handle categorization based on prototype analysis well, and a bottom-up model the semantic element. A middle-out model might handle both while allowing for growth of perspective. A middle path, between two pillars.
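For what the "top-down, prototype" half of that could mean in practice, here is a hypothetical nearest-prototype classifier (categories, features, and numbers all invented for the example, in the spirit of prototype theory):

```python
# Top-down categorization sketch: each category is summarized by a
# prototype (the mean of its known examples); new items are assigned
# to the category with the nearest prototype.

def prototype(examples):
    # The prototype is the per-dimension average of the examples.
    dims = len(examples[0])
    return [sum(e[i] for e in examples) / len(examples) for i in range(dims)]

def classify(item, prototypes):
    # Pick the category whose prototype is nearest (squared distance).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda name: dist(item, prototypes[name]))

# Invented feature vectors: [how much it flies, relative size].
protos = {
    "bird": prototype([[0.9, 0.1], [0.8, 0.2]]),
    "dog":  prototype([[0.0, 0.6], [0.1, 0.5]]),
}
print(classify([0.85, 0.15], protos))   # → bird
```

The bottom-up, semantic half would work the other way, building categories out of observed co-occurrences rather than imposed prototypes; a middle-out model would run both and reconcile them.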
hush1
1 / 5 (1) Jun 17, 2011
You are close. The wording is such that not everyone will understand what you said.
What other wording best expresses what you just stated? Or simply state an example. For example, the sound of rain is noise. That cannot be. If you 'know' the sound 'means' rain, noise is no longer noise at one level of understanding.
hush1
1 / 5 (1) Jun 17, 2011
By the "understanding" I mean the mapping of the association.
hush1
1 / 5 (1) Jun 17, 2011
My point. All along:
http://medicalxpr...ins.html

Human language is no exception.
Even the origin of first mappings.

..."the team activated the electronic device programmed to duplicate the memory-encoding function."

"These integrated experimental modeling studies show for the first time that with sufficient information about the neural coding of memories, a neural prosthesis capable of real-time identification and manipulation of the encoding process can restore and even enhance cognitive mnemonic processes."
