Interview: Dr. Ben Goertzel on Artificial General Intelligence, Transhumanism and Open Source (Part 1/2)

June 10, 2011 by Stuart Mason Dambrot, Phys.org feature
Dr. Ben Goertzel. Photo courtesy: Neural Imprints (http://www.neuralimprints.com/)

(PhysOrg.com) -- Dr. Ben Goertzel is Chairman of Humanity+; CEO of AI software company Novamente LLC and bioinformatics company Biomind LLC; leader of the open-source OpenCog Artificial General Intelligence (AGI) software project; Chief Technology Officer of biopharma firm Genescient Corp.; Director of Engineering of digital media firm Vzillion Inc.; Advisor to the Singularity University and Singularity Institute; Research Professor in the Fujian Key Lab for Brain-Like Intelligent Systems at Xiamen University, China; and General Chair of the Artificial General Intelligence Conference Series. His research encompasses artificial general intelligence, natural language processing, cognitive science, data mining, machine learning, computational finance, bioinformatics, virtual worlds and gaming, and other areas. Dr. Goertzel has published a dozen scientific books, more than 100 technical papers, numerous journalistic articles, and the futurist treatise A Cosmist Manifesto. Before entering the software industry, he served on university faculties in several departments of mathematics, computer science and cognitive science in the US, Australia and New Zealand.

Dr. Goertzel spoke with Critical Thought’s Stuart Mason Dambrot following his talk at the recent 2011 Transhumanism Meets Design Conference in New York City. His presentation, Designing Minds and Worlds, asked and answered two key questions: How can we design a world (virtual or physical) so that it supports ongoing learning, growth and ethical behavior? And how can we design a mind so that it takes advantage of the affordances its world offers? These are fundamental issues that bridge AI, robotics, cyborgics, game design, sociology, psychology and other areas. His talk addressed them from a cognitive systems theory perspective and discussed how they’re concretely being confronted in his current work applying the OpenCog Artificial General Intelligence system to control game characters in virtual worlds.


This is the first part of a two-part article. The second part is available at http://www.physorg.com/news/2011-06-dr-ben-goertzel-artificial-intelligence_1.html
SM Dambrot: We’re here with Dr. Ben Goertzel, CEO of Novamente, Leader of OpenCog and Chairman of Humanity+ [at the 2011 Humanity+ Transhumanism Meets Design Conference in New York City]. Thank you so much for your time.

Dr. Goertzel: It’s great to be here.

SM Dambrot: In your very interesting talk yesterday, you spoke about the importance of the relationship between minds and worlds. Could you please expound on that a bit in terms of Artificial General Intelligence?

Dr. Goertzel: As an AGI developer this is a very practical issue which initially presents itself in a mundane form – but many subtle philosophical and conceptual problems are lurking there. From the beginning, when you’re building an AGI system you need that system to do something – and most AI history is about building AI systems to solve very particular problems, like planning and scheduling in a military context, or finding documents online in a Google context, playing chess, and so forth. In these cases you’re taking a very specific environment – a specific set of stimuli – and some very specific tasks – and customizing an AI system to do those tasks in that environment, all of which is quite precisely defined. When you start thinking about AGI – Artificial General Intelligence – in the sense of human-level AI – you not only need to think about a broader level of cognitive processes and structures inside the AI’s mind, you also need to think about a broader set of tasks and environments for the AI system to deal with.

In the ideal case, one could approach human-level AGI by placing a humanoid robot capable of doing everything a human body can do in the everyday human world, and then the environment is taken care of – but that’s not the situation we’re confronted with right now. Our current robots are not very competent when compared with the human body. They’re better in some ways – such as withstanding extremes of weather that we can’t – but by and large they can’t move around as freely, they can’t grasp things and manipulate objects as well, and so on. Moreover, if you look at the alternatives – such as implementing complex objects and environments in virtual and game worlds – you encounter a lot of limitations as well.

You can also look at types of environments that are very different from the kinds of environments in which humans are embedded. For example, the Internet is a kind of environment that is immense and has many aspects that the everyday pre-Internet human environment doesn’t have: billions of text documents, satellite data from weather satellites, millions of webcams…but when you have a world for the AI that’s so different from what we humans ordinarily perceive, you start to question whether an AI modeled on human cognitive architecture is really suited for that sort of environment.

Initially the matter of environments and tasks may seem like a trivial issue – it may seem that the real problem is creating the artificial mind, and then when that’s done, there’s the small problem of making the mind do something in some environment. However, the world – the environment and the set of tasks that the AI will do – is very tightly coupled with what is going on inside the AI system. I therefore think you have to look at both minds and worlds together.

SM Dambrot: What you’ve just said about minds and worlds reminds me of two things. One is the way living systems evolved – that is, species evolve not in a null context but, as you so well put it, tightly coupled to, in this case, an environmental niche; every creature’s sensory apparatus is tuned to that niche, so mind and world co-evolve. The other is what you mentioned yesterday when discussing virtual and game worlds – that physics engines are not being used in all interactive situations – which leads me to ask what you think will happen once true AGIs are embodied.

Dr. Goertzel: If we want to, we can make the boundary between the virtual and physical worlds pretty thin. Most roboticists work mostly in robot simulators, and a good robot simulator can simulate a great deal of what the robot confronts in the real world. There isn’t a good robot simulator for walking out in the field with birds flying overhead, the wind, the rain, and so forth – but if you’re talking about what occurs within someone’s house a lot can be accomplished.

It’s interesting to see what robot simulators can and can’t do. If we’re trying to simulate the interior of a kitchen, for example, a robot simulator can deal with the physics of chairs and tables, pots and pans, the oven door, and so forth. Current virtual worlds don’t do that particularly well because they only use a physics engine for a certain class of interactions, and generally not for agent-object or agent-agent interactions – but these are just conventional simplifications made for the sake of efficiency, and they can be overcome fairly straightforwardly if one wants to expend the computational resources on simulating those details of the environment.

If you took the best current robot simulators, most of which are open source, and integrated them with a virtual world, then you could build a very cool massive multiplayer robot simulator. The reason this hasn’t happened so far is simply that businesses and research funding agencies aren’t interested in this. I’ve thought a bit about how to motivate work in that regard. One idea is to design a video game that requires physics – for example, a robot wars game in which players build robots from spare parts, and the robots do battle. You could also make the robots intelligent and bring some AI into it, which if done correctly would lead to the development of an appropriate cognitive infrastructure.

Having said that, going back to the kitchen – what would current robot simulators not be able to handle, but would have to be newly programmed? Dirt on the kitchen floor, so that in some areas you could slip more than others; baking, where mixing flour and sugar and putting the batter in the oven involves chemistry beyond what any current physics engine can really do; paper burning in the flame of a gas stove; and so on. The open question is how important these bits and pieces of everyday human life are to the development of an intelligence.

There’s a lot of richness in the everyday human world that little kids are fascinated by – fire, cooking, little animals – because this is part of the environmental niche that humans adapted to. Even the best robot simulators don’t have that much richness, so I think that it’s an interesting area to explore. I think we should push simulators as far as we can, create robot simulators with virtual worlds, and so forth – but at the same time I’m interested in proceeding with robotics as well because there’s a lot of richness in the real world and we don’t yet know how to simulate it.

The other thing you have to be careful of is that most of the work done with robots now completely ignores all this richness – and I’m as guilty of that as anybody. When we use robots in our lab in China, do we let them roam free in the lab? Not currently. We made a little fenced-off area, we put some toys in it, and we made sure the lighting is OK, because the robots we’re using (Aldebaran Nao robots) cost $15,000 and have a tendency to fall down. It’s annoying when they break – you have to send them back to France for repairs.

So, given the realities of current robot technology we tend to keep the robots in a simplified environment both for their protection, and so that their sensation and actuation will work better. They work, they’re cool, and they pick up certain objects well – but not most of those in everyday human life. When we fill the robot lab only with objects they can pick up, we’re eliminating a lot of the richness and flexibility a small child has.

SM Dambrot: This raises two more questions: Is cultural specificity required for any given AGI, and is it necessary to imbue an AGI with a sense of curiosity?

Dr. Goertzel: Our fascination with fire is an interesting example. You wonder to what extent it’s driven by pure curiosity versus our actual evolutionary history with fire – something that’s been going on for millions of years. I think our genome is programmed with reactions to many things in our everyday environment which drive curiosity – and fire and cooking are two interesting examples.

Having said that, yes, curiosity is one of the base motivators. We’re already using that fact in our OpenCog work. One of the top-level demands, as we call them, of our system is the ability to experience novelty, to discover new things. There are actually two such demands: discovering new things in the world around it, and having the experience of learning new things internally – so novelty can come through external or internal discovery. So we’ve already programmed things very similar to curiosity as top-level goals of the system. Otherwise you could end up with a boring system that just wanted to get all of its basic needs gratified, and would then sit there with nothing to do.

SM Dambrot: That’s very interesting – especially the internal novelty drive. That seems even more exciting in terms of any type of AGI analogue to human intelligence, because we spend so much time discovering ideas internally.

Dr. Goertzel: Some people more than others – it’s cultural to some extent. I think we as Westerners spend more time intellectually introspecting than do people from Eastern cultures. Being from a Jewish background, I grew up in a culture particularly inclined towards intellectual introspection and meta-meta-meta thinking.

On a technical level, what we’ve done to inculcate the OpenCog system with a drive for internal novelty and internal learning and curiosity is actually very simple: It’s based on information theory and is related to work by Jürgen Schmidhuber and others on the mathematical formulation of surprise. In an information-theoretic sense, OpenCog is always trying to surprise itself.
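To make that concrete, here is a minimal sketch in the spirit of Schmidhuber's information-theoretic formulation – not OpenCog's actual code; the class name and the crude frequency-based world model are illustrative assumptions. An observation's surprise is its negative log-probability under the agent's own predictive model:

```python
import math
from collections import Counter

class NoveltySeeker:
    """Toy surprise signal: the negative log-probability an observation
    had under the agent's own (here, crude frequency-based) world model.
    In this information-theoretic sense, rare observations are rewarding."""

    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def observe(self, observation):
        # Add-one smoothing so never-seen observations get finite surprise.
        p = (self.counts[observation] + 1) / (self.total + 2)
        surprise_bits = -math.log2(p)
        self.counts[observation] += 1
        self.total += 1
        return surprise_bits

agent = NoveltySeeker()
for obs in ["ball", "ball", "ball", "fire"]:
    print(obs, round(agent.observe(obs), 2))
# surprise falls for the repeated "ball" and jumps for the novel "fire"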

SM Dambrot: I recall that when Prof. Schmidhuber was discussing Recurrent Neural Networks at Singularity Summit ’09, he talked about how the system looks for that type of novelty in its bit configurations.

Dr. Goertzel: That’s right – and what we do with OpenCog is quite similar to that. These are ideas that I encountered in the 1980s in the domain of music theory, based on Leonard Meyer’s Emotion and Meaning in Music. He was analyzing classical music – Bach, Mozart and so forth – and the idea he came up with was that aesthetically good music is all about the surprising fulfillment of expectations, which I thought was an interesting phrase. Now, if something is just surprising it’s too random, and some modern music can be like that – modern classical music in particular. If something is just predictable – pop music is often like that, and some classical music seems like that – it’s boring. The best music shows you something new yet it still fulfills the theme in a way that you didn’t quite expect to be fulfilled – so it’s even better than if it just fulfilled the theme.

I think that’s an important aesthetic in human psychology, and if you look at the goal system of a system like OpenCog, the system is seeking surprise but it also gets some reward from having its expectations fulfilled. If it can do both of those at once then it’s getting many of its demands fulfilled at the same time, so in principle it should be aesthetically satisfied by the same sorts of things that people are.

This is all at a very vague level, because I don’t think that surprise and fulfillment of expectations are the ultimate equation of aesthetics, music theory or anything else. It’s an interesting guide, though, and it’s interesting to see that the same principles seem to hold up for human aesthetics in quite refined domains, and also for guiding the motivations of very simple AI systems in video game type worlds.
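As a toy illustration of that guide – a sketch, not a claim about OpenCog's goal system; the function, its inputs and the normalization constant are all hypothetical – one can combine Meyer's two ingredients so that neither pure predictability nor pure randomness scores well:

```python
def aesthetic_score(surprise_bits, fulfillment, max_bits=8.0):
    """Toy version of Meyer's 'surprising fulfillment of expectations':
    fulfillment in [0, 1] measures how well the event resolves the theme;
    surprise is normalized to [0, 1]. Pure predictability (low surprise)
    and pure randomness (low fulfillment) both score near zero."""
    novelty = min(surprise_bits / max_bits, 1.0)
    return novelty * fulfillment

print(aesthetic_score(0.2, 0.9))   # predictable pop hook: ~0.02
print(aesthetic_score(7.5, 0.05))  # random dissonance: ~0.05
print(aesthetic_score(4.0, 0.9))   # surprising-yet-fitting cadence: 0.45
```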

SM Dambrot: I’ve been wondering about materials and the structure of those materials. Do you think it’s important, or even necessary in any way, to have something that is patterned on our neocortical structure – neurons, axons, synapses, propagation – in order to really emulate our cognitive behavior, or is that not so relevant?

Dr. Goertzel: The first thing I would say is that in my own primary work right now with OpenCog, I’m not trying to emulate human cognition in any detail, so for what I’m trying to do – which is just to make a system that’s as smart as a human in vaguely the same sort of ways that humans are, and then ultimately capable of going beyond human intelligence – I’m almost sure that it’s not necessary to emulate the cognitive structure of human beings. Now, if you ask a different question – let’s say I really want to simulate Ben Goertzel and make a robot Ben Goertzel that really acts, thinks, and hopefully feels like the real Ben Goertzel – to do that is a different proposition, and it’s less clear to me how far down one needs to go in terms of emulating neural structure and dynamics.

In principle, of course, one could simulate all the molecules and atoms in my brain in some kind of computer, be it a classical or quantum computer – so you wouldn’t actually need to get wet and sticky. On the other hand, if you need to go to a really low level of detail, the simulation might be so consumptive of computing power that you might be better off getting wet and sticky with some type of nanobiotech. When you talk about mind uploading, I don’t think we know yet how micro or nano we need to get in order to really emulate the mind of a particular person – but I see that as a somewhat separate project from AGI, where we’re trying to emulate human-like, human-level intelligence that is not an upload of any particular person. Of course, if you could upload a person, that would be one path to a human-level AGI … it’s just that it’s not the path I’m pursuing now – not because it’s uninteresting, but because I don’t know how to progress directly and rapidly on that right now.

I think I know how to build a human-level thinking machine…I could be wrong, but at least I have a detailed plan, and I think if you follow this plan for, let’s say, a decade, you’d get there. In the case of mind uploading, it seems there’s a large bottleneck of information capture – we don’t currently have the brain scanning methods capable of capturing the structure of an individual human brain with high spatial and temporal accuracy at the same time, and because of that we don’t have the data to experiment with. So if I were going to work on mind uploading, I’d start by trying to design better methods of scanning the brain – which is interesting but not what I’ve chosen to focus on.

SM Dambrot: Regarding uploading, then, how far down do you feel we might have to go? Is imaging a certain level of structure sufficient? Do we have to capture quantum spin states? I ask because Max More mentioned random quantum tunneling in his talk, suggesting that quantum events may be a factor in cryogenically-preserved neocortical tissue.

Dr. Goertzel: I’m almost certain that going down to the level of neurons, synapses and neurotransmitter concentrations will be enough to make a mind upload. When you look at what we know from neuroscience so far – such as what sorts of neurons are activated during different sorts of memories, the impact that neurotransmitter levels have on thought, and the whole area of cognitive neuroscience – I think there’s a pretty strong case that neurons and glia, and the molecules intervening in interactions between these cells, and other things on this level, are good enough to emulate thought without having to go down to the level of quarks and gluons, or even (as Dr. Stuart Hameroff suggests) the level of the microtubular structure of the neuron’s cytoskeleton. I wouldn’t say that I know that for certain, but it would be my guess.

From the perspective of cryogenic preservation, you might as well cover all bases and preserve everything so well that even if our current theories of neuroscience and physics turn out to be wrong, you can still revive the person. So from Max More’s perspective as CEO of Alcor, I think he’s right – you need to preserve as much as you can, so as not to make any assumptions that might prevent you from reviving someone.

SM Dambrot: Like capturing a photograph in RAW image format…

Dr. Goertzel: Yes – you want to save more pixels than you’ll ever need just in case. But from the viewpoint of guiding scientific research, I think it’s a fair assumption that the levels currently looked at in cognitive neuroscience are good enough.

This is the first part of a two-part article. The second part is available at http://www.physorg.com/news/2011-06-dr-ben-goertzel-artificial-intelligence_1.html

Comments

Eikka
1.4 / 5 (12) Jun 10, 2011
They're still talking of Intelligence as if it can be replicated by a machine that operates on formal rules.

What I want to know for sure, before calling the machine intelligent, is whether the human brain is fundamentally similar to that kind of computational mechanism, or whether it employs some other mechanism which isn't computational.

For example, a mechanism that relies on some sort of truly random chaos effect to optimize answers isn't computable - you can only approximate it, and the more precisely you try, the more inefficient the AI becomes - and any computational approximation you may achieve just isn't the same thing.

If your brain is essentially a bag of a billion dice that you throw and see where the numbers fall, assuming that dice are truly random, trying to come up with a pseudo-random analog would not be intelligent in the same sense.
antialias_physorg
4.6 / 5 (11) Jun 10, 2011
They're still talking of Intelligence as if it can be replicated by a machine that operates on formal rules.

Well, the brain works on 'formal rules', too (Electrical/electrochemical ones).

The point whether the mechanism for intelligence is the same in computers or in brains is not really relevant. It's the effect (i.e. 'apparent intelligence') which is what counts.

For example, having a mechanism that relies on some sort truely random chaos effect to optimize answers isn't computationable

I think you are confusing computational and predictable. Building a good random number generator which doesn't rely on pseudo-random numbers isn't hard (e.g. use the decay of some radioactive isotope).
Eikka
1.5 / 5 (8) Jun 10, 2011
And here's why:

If you have a pseudo-random number generator, it works by taking some starting value, such as the number of seconds since 1.1.1970 etc. and can calculate a long list of numbers that have the characteristic distribution of random numbers.

The difference is that once the initial value is chosen, the list of numbers must follow. This creates a problem: a mechanism that is supposed to be random is now pre-determined. Every possible action the mechanism takes based on these numbers can be known beforehand by knowing the initial value.

So, our AI that uses pseudo-randomness is simply a machine that follows a pre-defined program that can be written down as a long list of IF x THEN y GOTO z.

And that is not intelligence. If it was, we'd have to argue that our television or the thermostat in the fridge is intelligent in the same sense as we are - just less so.
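Eikka's determinism point is easy to demonstrate: seed a pseudo-random generator twice with the same starting value and the entire "random" stream repeats. A minimal Python sketch:

```python
import random

# Two generators seeded with the same starting value (e.g. a timestamp)
# emit identical "random" streams: the whole sequence is predetermined.
a = random.Random(19700101)
b = random.Random(19700101)
print([a.randint(0, 9) for _ in range(8)])
print([b.randint(0, 9) for _ in range(8)])  # prints the same list
```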
Eikka
1 / 5 (3) Jun 10, 2011

I think you are confusing computational and predictable. Building a good random number generator which doesn't rely on pseudo-random numbers isn't hard (e.g. use the decay of some radioactive isotope).


A random number generator that relies on radioactive isotopes is precisely what I want for the analog of a bag of dice.

It is not computational: it doesn't compute anything, it takes something which is (believed to be) truly random and measures it. Now the only question is, does the brain have to have a billion independent random number generators to work, or can it do with only few?
antialias_physorg
4.7 / 5 (3) Jun 10, 2011
So, our AI that uses pseudo-randomness is simply a machine that follows a pre-defined program that can be written down as a long list of IF x THEN y GOTO z.

Not entirely: Modern programs (and all serious AI implementations) work in parallel over several machines.

Unless you implement artificial time stamps / time keepers for all passed messages, then you get 'real world' influences into the mix (e.g. variable lag between machines) which can quickly lead to a non-deterministic chain of events - even from a precomputable set of pseudo-random numbers.
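A minimal sketch of the effect antialias_physorg describes: each thread below is fully deterministic on its own, yet with no timestamps or locks the global interleaving depends on the OS scheduler, so it can differ between runs (whether it actually differs varies by platform and load):

```python
import threading

events = []

def worker(name):
    # Each thread is deterministic in isolation, but with no timestamps
    # or locks the global interleaving depends on the OS scheduler.
    for i in range(3):
        events.append((name, i))

threads = [threading.Thread(target=worker, args=(n,)) for n in "AB"]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(events)  # the A/B interleaving can differ from run to run
```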
Eikka
1.8 / 5 (5) Jun 10, 2011

The point whether the mechanism for intelligence is the same in computers or in brains is not really relevant. It's the effect (i.e. 'apparent intelligence') which is what counts.


As per Turing's argument, we cannot distinguish between a sufficiently complex machine that isn't intelligent, and a machine that is.

Apparent intelligence means nothing. The "easiest" way to meet the requirements is to simply throw so much computational power and data at it that you exhaust all the ways we can test the machine, and it's still just a mechanized puppet that says and does everything according to a list of instructions.
antialias_physorg
4.2 / 5 (5) Jun 10, 2011
Now the only question is, does the brain have to have a billion independent random number generators to work, or can it do with only few?

Mathematically the quality of a sequence of random numbers is no better (or worse) if you use one or many such generators.
Eikka
1 / 5 (2) Jun 10, 2011

Unless you implement artificial time stamps / time keepers for all passed messages


Which is exactly what you do - the computers have schedulers to keep them from crashing by getting into a data gridlock or race conditions etc. because they are not analog machines that can deal with division by zero or other fibs like that.
Eikka
1 / 5 (3) Jun 10, 2011
Mathematically the quality of a sequence of random numbers is no better (or worse) if you use one or many such generators.


If you get ten random values generated from a single measurement, then these ten values depend on the starting value. Thus they are linked - if you have one number here, then you must have a certain another number there.

In essence, the whole state of the "brain" is randomized from a single point, why not a single particle, which, if it can really work that way, presents interesting philosophical questions.
Eikka
2 / 5 (4) Jun 10, 2011
And the other problem of the single random number generator is the amount of information you can measure from it.

Let's say your random number generator outputs one truly random 32-bit number, because that's how much difference you can measure from your random particle. It means that your artificial brain can only have 4.3 billion different permutations of states it can exist in.
antialias_physorg
5 / 5 (1) Jun 10, 2011
Which is exactly what you do - the computers have schedulers to keep them from crashing by getting into a data gridlock or race conditions

Actually you don't HAVE to do that (I'm currently designing a software system for another company that works entirely asynchronously without the need of one type of component being aware of timing aspects of any other type of component.)

All you require is good error checking / validation. But mostly the software doesn't care what happens in which order.

If you get ten random values generated from a single measurement, then these ten values depend on the starting value.
I meant with radioactive random number generators. But even with pseudo-random generators: knowing the seeds of 10 generators generating one number each is equivalent to knowing the seed of one generator and generating 10 numbers from it.

can only have 4.3 billion different permutations of states it can exist in.

Just generate more numbers then.
nothingness
5 / 5 (4) Jun 10, 2011
why not a quantum random number generator?
LivaN
not rated yet Jun 10, 2011
For example, a mechanism that relies on some sort of truly random chaos effect to optimize answers isn't computable


I don't understand.

You say that if mechanism A (the human brain) relies on true randomness (TR) at some point to generate output, then the entire process can't be computationalized, because computation cannot generate TR.
But the fact that mechanism A has access to TR (whether through quantum effects or something as yet undiscovered) means there must be some TR-generating mechanism that affects the physical world enough for mechanism A to interact with it. We could use or duplicate that mechanism if possible.

Why compute randomness when you already have it, given that the human brain already interacts with a mechanism that provides TR?
El_Nose
not rated yet Jun 10, 2011
Wow, you guys went off on some weird tangents...

But if you wanted to introduce true randomness into the system, then you could very easily change what type of processor is used. Current CPUs use error detection and correction internally to fix random changes in voltage that occur -- but many FPCPUs can be a lot more lenient in this regard, and this also means that they can be a lot faster than current CPUs. So the basic idea is this -- the CPU every now and again might say 1+1=3 or 5, but this is not what the CPU should be processing -- it should be linked like a neuron to sensors that use a classical design without fail -- kinda like the human brain can hallucinate, but that does not mean that the eyes are feeding it the wrong info; it means that the brain is interpreting it wrong.

I would love a grant to pursue this sort of work.
George_Rodart
not rated yet Jun 10, 2011
Random numbers seem irrelevant here. Is our theoretical AI deterministic? Will it come to the same result repeatedly in some computational manner? Or will it produce statistical results like quantum phenomena? Even if you create a very smart AGI, one with all the right answers, will it be conscious, that is, self-aware?
Isaacsname
1 / 5 (1) Jun 10, 2011
Just what I was thinking about. Sort of. What would have happened in the universe if conscious entities never came into existence? Would things have progressed, evolved, only to a certain point? Would evolution have come to a halt? It seems that the physical evolution of things in the universe could only go so far without conscious life around to manipulate environments in ways nature never intended. Like life itself was necessary to overcome a dead end. I feel almost like we have been thrust into "hyper-evolution" by the advent of consciousness. Are we outpacing our biological ability to evolve with an environment that changes far faster than natural evolution normally proceeds? Why do we have brains that can learn complex math and physics in the first 3 years of life, yet we have to go to school to learn maths? Why is the "I", the conscious idiot, at the forefront of perception? Because a computer cannot have unorthodox thoughts?
Isaacsname
1 / 5 (1) Jun 10, 2011
Can AI researchers program a computer to fool itself? Is that a humans-only ability? I read that an average human brain could be compared to a 160,000-megahertz processor, but yet, consciously, we are thinking very slowly: we use the language we are familiar with speaking out loud to talk to ourselves in our minds (self-discussion) or have thoughts. In that respect, we lose to computers by a long shot. I love that the human brain is actually shrinking as we "evolve" – the corpus callosum of a macaque allows communication between hemispheres twice as fast as a human's, yet we see ourselves as "dominant" in many ways over monkeys. I'd wager that between the de-evolution of the human brain and the exponential growth predicted by Moore's Law, we should be close to "real" AI sometime soon.

I look forward to the 2nd part of this interview.
ngrailrei
2.3 / 5 (3) Jun 10, 2011
My recent book Deus ex Machina sapiens (available on Amazon) takes issue with the notion that mind or intelligence can be designed, though it can be taught even as it is developing (therefore the work of designers such as Dr. Goertzel in including ethical considerations in their designs, or at least their design philosophies, is good). My book argues that intelligence/mind/consciousness have never been and cannot be designed; they can only emerge through evolutionary processes.

Please allow the plug, since it is critically relevant to the discussion.
Isaacsname
not rated yet Jun 10, 2011
" There is NO life form that will give you a 'reason' to contact a higher intelligence, whether AI or not.

Alright, I see, you don't understand. Give me at least ONE reason I can not REFUTE, as to why I need to 'contact' you!

Good Luck! "

I agree, we can't concretely say exactly when, when we can't settle on a definition of what "it" is. As far as the purpose of trying to contact a perceived "higher" lifeform, I would just think it's for the purpose of sharing information. I believe ultimately that we exist only for the self-preservation of information, but only information that serves the greater good of our species. When we leave physical existence, the only things we leave behind are our bodies and the information that passed through us – ironic that information has no tangible physical characteristics, yet is the only thing that is left behind in the physical universe. Sorry, a tangent. But how do you give AI a sense of morals, or altruism?
Eikka
2 / 5 (3) Jun 10, 2011
Why compute randomness when you already have it, given that the human brain already interacts with a mechanism that provides TR?


It is a question of structure.

If you have to have billions and billions of independent random number generators to get true intelligence, as we assume the human brain to possess, then the truly intelligent machine must also be an analog of this structure.

Generating these independent random numbers and then distributing them through a network of completely deterministic processors is simply inefficient. The machine that behaves like a person might be the size of a city and still not be able to think half as fast as we do.

Knowing the seeds of 10 generators generating one number each is equivalent to knowing the seed of one generator and generating 10 numbers from it.


Yet this is not the same, because those ten generators are independent. At any given time, values A and B are not linked by value C.
Eikka
1.7 / 5 (3) Jun 10, 2011
In essence, the question of linked or independent random values is this: is it possible that a single fundamental particle or some similar entity could be responsible for the whole behaviour of an intelligent entity of arbitrary size and composition?

(Well, not the -whole- as there must be a mechanical framework that "filters" this randomness to produce the behaviour, like Brownian motion in water, but you get the point)

Let's say it's a single hydrogen atom. Again, assuming that intelligence works through a truly random mechanism and I'm not simply mistaken. A single hydrogen atom would be the equivalent of me, and why not you, and everybody else in the world, because given enough readings it would provide enough random values to drive all of humanity – although making those readings would take significantly more time and energy than simply having a hundred billion neurons doing the same thing in parallel.
Eikka
2.3 / 5 (3) Jun 10, 2011

can only have 4.3 billion different permutations of states it can exist in.

Just generate more numbers then.


This is inefficient. What is the point of an artificial intelligence when it may require a hundred billion steps to do what the brain does in one step in parallel?

Actually you don't HAVE to do that (I'm currently designing a software system for another company that works entirely asynchronously


Good luck with that. Most AI researchers don't even seem to try, instead arguing that you can simply deterministically compute the entire thing. I want to know if that's true


All you require is good error checking / validation. But mostly the software doesn't care what happens in which order.


If we're talking about the brain, there is no error checking or fallbacks to known working states. Everything just happens and the brain has to deal with it. Errors are a fundamental part of the...
blawo
3.7 / 5 (3) Jun 10, 2011
Brilliant draft for a new Monty Python sketch.
blawo
1 / 5 (2) Jun 10, 2011
Thank God the quantum information revolution has started. Good chance we can get rid of this materialistic crap very soon!
blawo
2 / 5 (4) Jun 10, 2011
"Good chance we can rid of this materialistic crap very soon!" - blaw

Nonsense. Quantum computers will never be general purpose, and mind isn't a quantum state.


Mind is JUST that. Quantum state.

blawo
1 / 5 (4) Jun 10, 2011
You have as much justification for making that claim as claiming that the mind is a block of cheese.


The cheese does not necessarily include states which cannot be translated into language. A quantum state - by definition - has this inability. While a quantum state can be described in language, it cannot be articulated. Which is just - this is the *just* you got in my previous post - which is just precisely our basic problem with conscious phenomena: the inability of translation into words. Why, for God's sake, look for complicated and never-sufficient answers, when nature is that simple? Namely, consciousness is the quantum part of the mind, the part which cannot be translated into language - because no quantum state in general can be.
ngrailrei
2.3 / 5 (3) Jun 10, 2011
"My book argues that intelligence/mind/consciousness have never been and cannot be designed, they can only emerge through evolutionary processes." - ngrail

Yes, it is hard to design something that you don't understand.

On the contrary. We do that all the time; at least, we design many things with only a minimal understanding of how they work. But you are in any case ignoring or misunderstanding my point, which is that there are good reasons to believe (and not merely to presume) that mind cannot be programmed, period. To explain that has taken a whole book, so please don't expect me to explain it here.
http://www.amazon...p;sr=8-1
blawo
2.3 / 5 (3) Jun 10, 2011
All states can be translated into language. Information is infinitely transmutable. What are you trying to say?


Tell this to the quantum cryptography people. That you can transmute quantum-encrypted photons to classical information and vice versa :) Sorry, my fellow, YOU are the guy who writes words...

Quantum theory is a solid scientific discipline. Terms like "quantum information" and "classical information" both have a well-defined, physical MEANING, as does the experimentally verified inability to express quantum information in classical bits.

Ignoring the truth about the physical universe around you is your right, of course, but then you cannot hope to be part of the frontline any longer...
unknownorgin
1 / 5 (1) Jun 11, 2011
I read an article about scanning a monkey's brain while the monkey was looking at an object, and they were surprised to see a 3-dimensional image of the object in the monkey's brain. As far as I know, all of our digital circuitry is 2-dimensional, like a sheet of paper. 3-dimensional circuitry would have an advantage because data is accessible any point to any point, and objects seen can be examined in a tactile, real-world manner, just like humans and animals must do.
Ethelred
1 / 5 (1) Jun 11, 2011
Mathematically the quality of a sequence of random numbers is no better (or worse) if you use one or many such generators.
Not true. IF you do it properly, a set of generators can generate a number with more precision. If you do it wrong, the precision remains the same. Also, as you pointed out, if you use RNGs that are interactive but running on different clocks, you should reach a level of true unpredictability.

Perhaps I shouldn't have given a one on that. Sorry.

I meant with radioactive random number generators.
Those don't work the same as pseudo-random generators, since you have to wait for them. They are time-sensitive.

Ethelred
Ethelred
1 / 5 (1) Jun 11, 2011
Eikka
Which is exactly what you do
Not if you want real unpredictability. If that is what you want you need to have some variance in the timing of different systems. Clearly wait states would be needed to avoid jams.

In essence, the whole state of the "brain" is randomized from a single point,
This is going down a path that has nothing to do with AI. Randomness is only a tiny part of what could be needed. Fuzzy numbers are much more important IF you want to match humans. If you don't want to match humans, then I don't think randomness is needed except occasionally.

Ethelred
Ethelred
3 / 5 (2) Jun 11, 2011
as we assume the human brain to possess,
You are assuming this. I see no need except to avoid predictability. Which is needed for competition, not for analysis.

Again assuming that intelligence works through a truly random mechanism and I'm not simply mistaken.
I am pretty sure you are at least partly mistaken. Some of human intelligence must be deterministic. Some is fuzzy, but fuzzy is not the same as random.

In any case full emulation of humans is not what AI general or otherwise is about.

Most AI researchers don't even seem to try, instead arguing that you can simply deterministically compute the entire thing. I want to know if that's true
Well back to humans. We are NOT deterministic and the parts are not synchronized. Well I am pretty sure on that.>>
Ethelred
2.7 / 5 (3) Jun 11, 2011
My thinking on self-awareness, which may not be needed for AI but is for human intelligence, is that the parts of the brain watch each other. Not all parts watch all parts, but some certainly do watch other parts. I can think about what I am thinking about on verbal and non-verbal levels at the same time. I suspect that cannot be emulated by a Turing Machine, only by machines that are NOT synched. Synched machines are ALL Turing machines, except for parts that are truly random, and only some of what is going on in brains of any kind is truly random.

Ethelred
Isaacsname
1 / 5 (1) Jun 11, 2011
"Mind is JUST that. Quantum state." - blao

You have as much justification for making that claim as claiming that the mind is a block of cheese.

The mind does however consist of a superposition of states. This fact is particularly evident when one considers memory.


Yes! Exactly. A superposition of states, in constant flux. Never static and always a superposition of fairly precise approximations. But why is the "I", the real dummy in the brain, in the driver's seat of the body – or is that an illusion as well?
Isaacsname
not rated yet Jun 11, 2011
@Isaac

Your body, as well as all things physical, can be cloned.

Actually, a branch of physics, particle physics, prides itself on having elementary particles that are identical to each other.

There is an exception to elementary particles being identical to each other. Being identical depends solely on whether you can view the particles as 'isolated' from the 'surroundings'.

Returning to your body. If I replace your body with a clone of your body, the 'You', you call 'You', will always lack the information to know the difference between the body I took from you, and the body clone replacement.
(Which only means 'You' are the 'driver' - your seat or body makes no difference)

That is like giving software the ability to distinguish between an exchange of identical hardware.

In particle physics that can be done.
It remains to be seen if AI follows a similar strategy as used in particle physics.


..lost me there... what's a particle?
Ethelred
3.5 / 5 (4) Jun 12, 2011
Deesky

If you don't understand the conversation don't rank people.

If you DO understand, then why aren't you in the conversation?

Now if this was brain-dead blathering from Marjon, I could understand the ones. However, this has been an attempt at communication, however poorly some have understood it – you included, or you would not have ranked me a one on that post.

You too should read the Emperor's New Mind by Penrose. Warning. The book will make your brain hurt if you actually try to follow it. Unless you are a genius. A high genius and a Turing machine fan.

A Turing machine, which in the original thought experiment version was a serial device with paper tape, can emulate any computer presently in commercial use. Only very slowly. Thus a Turing machine is useful for looking at how computing is really done. Any machine that can be emulated by a Turing Machine IS a Turing Machine.

Ethelred
Ethelred
3.7 / 5 (3) Jun 12, 2011
Vendicar
"The physical media is energy." - Ethel
I didn't say that.

The OS in the computer I am now using watches over the memory usage of the applications that are running.
Yes. It is a Turing machine.

In comparison your self awareness is minimal.
You don't understand what I am getting at. Try The Emperor's New Mind by Roger Penrose. Turing machines are limited by Gödel's proof. There are things that Turing Machines simply cannot do due to the limits on purely logical thinking. Penrose thinks we beat these limits by using quantum methods. I was thinking there might be other ways that can be emulated, or perhaps forced, by making the machine asynchronous to get it out of the Turing limits.>>
Ethelred
3 / 5 (2) Jun 12, 2011
What is lacking are the higher order concepts.
I think the wait states involved in an asynchronous machine may help produce that by spending the time to analyze the processes in the other machines. The CPU in your PC is all one machine. None of it is aware of what the other parts are thinking about; the parts are merely engaged in control, not analysis.

Now what were you saying about information being energy?
He might have been thinking of Maxwell's Demon vs. information theory. There has been a lot of thinking going on about information vs. energy due to that idea.

Either that or he is using energy as shorthand for matter/energy to save characters. Or he's confused. I'm never quite sure with hush whether he is saying something significant or trivial in regards to language. That often depends on your point of view, and stuff that seems profound to him is often stuff I thought of long ago that has begun to seem basic to me.>>
Pyle
3 / 5 (2) Jun 12, 2011
Wow! This is a fun one. I am so sorry I missed out on the early goings. I wish I had a sliver of the intelligence of Dr. Goertzel. (pun...)

Anyway, VD and hush puppy, you guys are great. Your disagreements are all semantics. While this is everything in debate, I think it is very little in reality. I think hush says it best just now:
You want to be argumentative. Only for the sake of argument.

Try some introspection though and see that you are there too.

We can all ask Maxwell's demon if information = energy. Until the experiment is designed and carried out, we can all agree to disagree, I think.

Anyway, maybe Dr. G will answer a question about self-awareness. I am pretty sure it is required, and is already quite prevalent, as VD said quite well; only the higher orders are lacking.

As for randomness, why? So that an AGI can be wrong? The universe introduces enough randomness without designing in errors in thinking. Silly Eikka.
Pyle
3 / 5 (2) Jun 12, 2011
Ethelred, I hate you. I spent all this time writing my worthless comment just so that you could place yours, with most of my points, just above mine. thppppt!

Regarding your counter to VD's awareness point, I humbly disagree. I think VD was on the right track. What is lacking is a general awareness of the computer's relative position in the environment. Ultimately we'll add that and then push into richer and richer environments. AGI environments will be vastly richer than our current human state in short order after their creation.

And of course I don't hate you Eth. Actually quite sorry I interrupted your posts.
Ethelred
1 / 5 (1) Jun 12, 2011
So then you are a collection of mathematical expressions.
I suspect the entire MultiVerse is exactly that. Including iterative functions.

Consistent forms of thought aren't your strong point are they?
English is not his native language, and he is often thinking and translating. Stuff gets lost.

Ethelred
Ethelred
1 / 5 (1) Jun 12, 2011
Hush1
Define 'Space.'
A set of numbers intended to deal with position. More properly Space-Time. If you think that is circular then just call it a set of numbers which we label as space-time. I can live with it either way.

Define 'fine media'.
Anything that information of any kind is stored in.

Define 'encryption'.
Adding a known sequence of noise to the information to make it look like noise instead of information. Only those that know the exact sequence of noise can extract the original information.

There is no need to define: what is a number? what is a zero?
I cheat. Do you have definitions? Plural. Then you have numbers. It matters not what you use for the numbers, as long as they can have an order.

definition is only for zero, whatever zero is.
The number one minus itself. Or two minus itself no matter how you are representing one and two. This is for linear numerical sequences as circular sequences aren't fit for our Universe.>>
Ethelred
1 / 5 (1) Jun 12, 2011
Actually the quote, your statement, makes no sense to me
Made sense to me.

You two have started whacking each other instead of discussing. Not surprising with Vendicar's nasty habits of acting arrogant and assuming everyone is a Tard till proven otherwise. This causes him to miss a lot of rational statements because he assumed it made no sense and didn't bother to look harder.

I think most of us do that now and then but Vendicar seems to think it is an art form that needs to be cultivated rather than avoided.

Information is the distribution of energy without a unit of measure
Sorry but that isn't information. You need a unit of measure for that. It can be completely arbitrary BUT it must be agreed upon by all the users of the information.

What is "the absence of matter/energy"?
Think of it as YES or NO. Bit SET or NOT-SET. ONE or ZERO.

The movie Cool Hand Luke nails the two of you with one line.
What we have heah,.. is a failure... to communicate.
>>
Ethelred
1 / 5 (1) Jun 12, 2011
We can change the language if this helps. Are you multilingual
Try math and GEEK speak. Vendicar is trying to use information theory. You don't seem to have got that.

Information is the distribution of energy.
No. Information, in the context of intelligence, is bits or something similar with meaning attached to them by the users. Energy is just the form of the bits OR the CHANGE in the state of the bits. Energy is lost in the change, though there is supposed to be a way to retain the energy IF the processes involved in the changes are reversible.

the information we call banana.
This is a mistake in thinking. Information is what the users store and manipulate. The banana is data that is converted to information. Unless we are talking about a virtual banana in a simulation. Try having a discussion about DNA and information with an ID fan and you will understand where I am coming from on this. The DNA is the data, but our transcription of it into CGTA is information.>>
Ethelred
3 / 5 (2) Jun 12, 2011
Swapping out IDENTICAL electrons CHANGES the STATE of the electron
No. If it is an exact swap there is no change of state. All electrons are identical. So far anyway and in theory that should remain true. Replacing them perfectly would be undetectable in practice and theory.

The object is no longer in it's previous state.
No. You don't understand the nature of electrons. They are IDENTICAL and if no change can be detected there is no change. There is nothing to detect because the state is the same in your example. IF you flip spin for instance then you have a change of state AND of information.

Due to the way the swapped-out electron distributes its energy. The electrons' phase changed state.
Go read up on this. You are just plain wrong. It has even been supposed that there is only ONE electron and it is just in different places and times. Silly to me, but I mention it so you get a clue.

IF THE ELECTRON HAS THE SAME STATE THERE IS NO CHANGE IN INFORMATION.

Ethelred
cockmuffin
2.6 / 5 (5) Jun 12, 2011
VD, quit being such a douchebag.
CSharpner
1 / 5 (1) Jun 12, 2011

Unless you implement artificial time stamps / time keepers for all passed messages


Which is exactly what you do - the computers have schedulers to keep them from crashing by getting into a data gridlock or race conditions etc. because they are not analog machines that can deal with division by zero or other fibs like that.


It's not what *I* do when I write parallel code, unless I need a specific type of syncing. If I was writing AI code and if I wanted randomness, I would make a point of NOT using timestamps. Most parallel code does NOT require that level of syncing. If it did, in many cases, it's not a good candidate for parallelism and would likely be written as a serial operation.

With AI, you generally have lots of loosely sync'd threads and LOTS of external input constantly interacting with internal thoughts. There's plenty of natural randomness to go around.
CSharpner
not rated yet Jun 12, 2011
Let's say your random number generator outputs one truly random 32-bit number, because that's how much difference you can measure from your random particle. It means that your artificial brain can only have 4.3 billion different permutations of states it can exist in.

That would only be true if your artificial brain's state memory were only 32 bits wide. Not even a simple wristwatch computer has a state THAT small. The number of "different permutations" of a state is 2 raised to the number of bits that hold the state. In an AI system, that's roughly the amount of RAM storage. Let's say you've got 4 TB. That's 32 terabits. What's the largest number you can store in 32 trillion bits? That's the number of unique machine states that machine supports. The size of any random number generator has nothing to do with the amount of unique machine states the machine supports. You're giving significantly too much credit to a random number in AI.
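CSharpner's arithmetic, spelled out in a short sketch (the 4 TB figure is the one used in the comment):

```python
import math

rng_states = 2 ** 32          # states distinguishable by a 32-bit source
print(f"{rng_states:,}")      # 4,294,967,296 -- the ~4.3 billion above

ram_bits = 4 * 10 ** 12 * 8   # 4 TB of RAM, expressed in bits
digits = int(ram_bits * math.log10(2))
print(digits)                 # 2**ram_bits has ~9.6 trillion decimal digits
```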
Au-Pu
1 / 5 (5) Jun 12, 2011
Dr Ben Goertzel is self-deluding, as are all the so-called AI "developers".
Computers can only operate within the limits of their programming.
Human intellect can find itself in an alien environment, assess that environment and find ways to survive and even flourish in it.
That is because the brain is adaptive.
Also, how do you program intuition into a computer?
AI, like so much else in the electronics field, is a bullshit concept used to extract funds from gullible politicians who fancy that it could give them some sort of advantage when developed.
Which proves that the politicians are as delusional as the AI developers.
They make a good pair, except for the fact that it is taxpayers' money they are wasting.
Isaacsname
1 / 5 (1) Jun 12, 2011
Also how do you program intuition into a computer.


Ahh yes, intuition. Is it not correct to assume that when I experience "intuition", it is in fact my brain calculating probability amplitudes for future events? "Oooh, I knew I should have picked number 6" is not really "I" knew; it is my brain knew, and I chose to consciously override the answer my brain provided through calculations in the sub-conscious. I remain convinced that compared to the sub-conscious mind, the forefront human consciousness is practically as dumb as a bag of hammers, and future efforts to emulate this for the purpose of "AI" will yield null results.

Off topic: Cats and Roombas – opinions or thoughts about this?
Recovering_Human
5 / 5 (1) Jun 12, 2011
Computers can only operate within the limits of their programming.


So can we; we just have better hardware and more complex software. For now. Again, what if, in some decades, we had the technology required to scan a human brain's structure down to the last cell, model the scan on a computer, and run it (with certain other relatively-minor technicalities taken care of)? Unless you really think there's some magical essence beyond the laws of physics that gives us our intelligence, I don't see how you could argue that a computer as intelligent as a human hadn't been created.
cockmuffin
1 / 5 (2) Jun 12, 2011
And again you have just contradicted yourself by responding. This time with a correction to your previous response in which you said that you would not respond.

I guess that means (Tard)**2

How about (Douchebag)**2 ?
CSharpner
5 / 5 (2) Jun 13, 2011
Dr Ben Goertzel is self-deluding, as are all the so-called AI "developers".
Computers can only operate within the limits of their programming.
Human intellect can find itself in an alien environment, assess that environment and find ways to survive and even flourish in it.
That is because the brain is adaptive.
Also, how do you program intuition into a computer?
AI, like so much else in the electronics field, is a bullshit concept used to extract funds from gullible politicians who fancy that it could give them some sort of advantage when developed.
Which proves that the politicians are as delusional as the AI developers.
They make a good pair, except for the fact that it is taxpayers' money they are wasting.

What kind of experience do you have in programming? I've got 29 years of experience and I can tell you, as a matter of fact, that software CAN be written to adapt. I write adaptive software every day.
(continued...)
CSharpner
5 / 5 (2) Jun 13, 2011
(continued...)
AI is limited in its complexity of adaptation only by the hardware we're working with and the skills of the programmers. Just because YOU don't understand it doesn't mean it can't be done. Unless you have some impressive programming skills, don't be telling me what my peers and I can and can't do with computers.
CSharpner
3 / 5 (2) Jun 13, 2011
@CSharpner
I am aware of (in a limited sense) "adaptive" hardware: available circuits switching to other available circuits to optimize the time (the process or assigned task) at hand.

The 'switch' controlling the 'switch' in hardware can only be attributed to 'adaptive' software. Is this (simplistic) causal view correct?


More or less, but I wasn't referring to adaptive hardware. I was referring to software that adapts, with or without adaptive hardware.

Software can emulate adaptive hardware if it needs to, but that's really outside the scope of my point. Software can adapt to new situations. Actually, MOST software DOES adapt to one degree or another. Human intelligence is very powerful software with a huge and highly flexible data storage mechanism and awesome data querying ability. Intuition can be replicated by low-level routines running outside the context of the "consciousness" thread(s).
CSharpner
not rated yet Jun 13, 2011
New article posted that fits right in with our discussion on how AI can be highly adaptive.

http://www.physor...lly.html
CSharpner
3 / 5 (2) Jun 13, 2011
You're welcome, but credit the physorg guys for posting it. I'll be happy to take all their credit though! :)
simpletim
not rated yet Jun 14, 2011
If the goal of AGI is to recreate a "human level of intelligence" then it makes sense to me to start small and work up, just like life evolved from small critters into humans.

Also, neurons are in an environment (some animal body) which is itself then in an environment. In a virtual simulation, the richness of all three elements can be modified until we achieve a functional virtual organism that acts just like a real one.

Experiments creating virtual nematodes and environments might be a good place to start. Changes to either the neuronal model, the nematode model, or the environment model can be observed to determine effects. This should lead to insights on the level of complexity that needs to be modelled to achieve the desired outcome.

I'm not sure exactly what the desired outcome is, but this can be an evolutionary path towards a range of outcomes.

re_coyote
not rated yet Jun 14, 2011
Graft vs. Host. . .
QSO
not rated yet Jun 17, 2011
There is no such thing as an objective perspective, by virtue of the subject perceiving. The perspective is instantiated in the language utilized to communicate. Noise is unintelligible information, but only to those who cannot understand. The top-down model, with regard to categorization based on prototype analysis, might yield subsets well, and the bottom-up model might handle the semantic element. The middle-out model might handle both well while allowing for growth of perspective. A middle path, between two pillars.
