Five ways the superintelligence revolution might happen

September 26, 2014 by Nick Bostrom

Biological brains are unlikely to be the final stage of intelligence. Machines already have superhuman strength, speed and stamina – and one day they will have superhuman intelligence. This is of course not certain to occur – it is possible that we will develop some other dangerous technology first that destroys us, or otherwise fall victim to some existential risk.

But assuming that scientific and technological development continues, human-level machine intelligence is very likely to be developed. And shortly thereafter, superintelligence.

Predicting how long it will take to develop such intelligence is difficult. Contrary to what some reviewers of my book seem to believe, I don't have any strong opinion about that matter. (It is as though the only two possible views somebody might hold about the future of AI are "machines are stupid and will never live up to the hype!" and "machines are much further advanced than you imagined and true AI is just around the corner!").

A survey of leading researchers in AI suggests that there is a 50% probability that human-level machine intelligence will have been attained by 2050 (defined here as "one that can carry out most human professions at least as well as a typical human"). This doesn't seem entirely crazy. But one should place a lot of uncertainty on both sides of this: it could happen much sooner or very much later.

Exactly how we will get there is also still shrouded in mystery. There are several paths of development that should get there eventually, but we don't know which of them will get there first.

Biological inspiration

We do have an actual example of a generally intelligent system – the human brain – and one obvious idea is to proceed by trying to work out how this system does the trick. A full understanding of the brain is a very long way off, but it might be possible to glean enough of the basic computational principles that the brain uses to enable programmers to adapt them for use in computers without undue worry about getting all the messy biological details right.

We already know a few things about the working of the human brain: it is a neural network, it learns through reinforcement learning, it has a hierarchical structure to deal with perceptions, and so forth. Perhaps there are a few more basic principles that we still need to discover – and that would then enable somebody to cobble together some form of "neuromorphic AI": one with elements cribbed from biology but implemented in a way that is not fully biologically realistic.
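To make "reinforcement learning" concrete, here is a minimal tabular Q-learning sketch. This is purely illustrative – a toy algorithm from machine learning, not a claim about how the brain actually implements reinforcement learning – and the two-state world, state names and reward values are invented for the example:

```python
# Toy tabular Q-learning: a minimal illustration of reinforcement
# learning, not a model of the brain's actual learning algorithm.
ALPHA = 0.5   # learning rate
GAMMA = 0.9   # discount factor

def q_update(q, state, action, reward, next_state, actions):
    """Apply one temporal-difference update to the action-value table."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# An invented two-state world: taking "right" in state s0 leads to s1
# and earns a reward of 1; s1 itself is never rewarded.
actions = ["left", "right"]
q = {}
for _ in range(50):
    q_update(q, "s0", "right", 1.0, "s1", actions)

# q[("s0", "right")] has converged very close to 1.0
```

The point of the sketch is only that behaviour is shaped by repeated reward signals rather than by explicit programming of the answer.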

Pure mathematics

Another path is the more mathematical "top-down" approach, which makes little or no use of insights from biology and instead tries to work things out from first principles. This would be a more desirable development path than neuromorphic AI, because it would be more likely to force the programmers to understand what they are doing at a deep level – just as doing an exam by working out the answers yourself is likely to require more understanding than doing an exam by copying one of your classmates' work.

In general, we want the developers of the first human-level machine intelligence, or the first seed AI that will grow up to be a superintelligence, to know what they are doing. We would like to be able to prove mathematical theorems about the system and how it will behave as it rises through the ranks of intelligence.

Brute Force

One could also imagine paths that rely more on brute computational force, such as by making extensive use of genetic algorithms. Such a development path is undesirable for the same reason that the path of neuromorphic AI is undesirable – because it could more easily succeed with a less than full understanding of what is being built. Having massive amounts of hardware could, to a certain extent, substitute for having deep mathematical insight.
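To illustrate what "brute force via genetic algorithms" means in practice, here is a deliberately trivial sketch: it evolves random bit-strings toward an arbitrary target purely by selection, crossover and mutation, with no insight into the problem. The target, population size and rates are invented for the example:

```python
import random

random.seed(0)     # fixed seed so the toy run is repeatable
TARGET = [1] * 20  # arbitrary goal: a string of twenty 1-bits

def fitness(genome):
    # Score by counting matching bits; the algorithm never "understands" why.
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=30, generations=100, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break  # an optimal genome has appeared
        parents = pop[: pop_size // 2]  # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]  # bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
```

With these settings the population typically reaches the target within a few dozen generations – success arrives without the programmer ever articulating why the solution works, which is exactly the worry about this development path.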

We already know of code that would, given sufficiently ridiculous amounts of computing power, instantiate a superintelligent agent. The AIXI model is an example. As best we can tell, it would destroy the world. Thankfully, the required amounts of computing power are physically impossible.
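For the curious, Hutter's AIXI agent can be written down compactly. Roughly, in its finite-horizon form, at cycle k it picks the action maximising expected future reward under a Solomonoff-style mixture over all programs q consistent with the interaction history – and the sum over all programs is what demands the physically impossible amounts of computing power:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       (r_k + \cdots + r_m)
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here U is a universal Turing machine, ℓ(q) is the length of program q, and the o's and r's are observations and rewards; this is the standard formulation from Hutter's work.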

Plagiarising nature

The path of whole brain emulation, finally, would proceed by literally making a digital copy of a particular human mind. The idea would be to freeze or vitrify a brain, chop it into thin slices and feed those slices through an array of microscopes. Automated image recognition software would then extract the map of the neural connections of the original brain. This 3D map would be combined with neurocomputational models of the functionality of the various neuron types constituting the neuropil, and the whole computational structure would be run on some sufficiently capacious supercomputer. This approach would require very sophisticated technologies, but no new deep theoretical breakthrough.

In principle, one could imagine a sufficiently high-fidelity emulation process that the resulting digital mind would retain all the beliefs, desires, and personality of the uploaded individual. But I think it is likely that before the technology reached that level of perfection, it would enable a cruder form of emulation that would yield a distorted human-ish mind. And before efforts to achieve whole brain emulation would achieve even that degree of success, they would probably spill over into neuromorphic AI.

Competent humans first, please

Perhaps the most attractive path to machine superintelligence would be an indirect one, on which we would first enhance humanity's own biological cognition. This could be achieved through, say, genetic engineering along with institutional innovations to improve our collective intelligence and wisdom.

It is not that this would somehow enable us "to keep up with the machines" – the ultimate limits of information processing in a machine substrate far exceed those of a biological cortex, however far enhanced. The contrary is instead the case: human cognitive enhancement would hasten the day when machines overtake us, since smarter humans would make more rapid progress in computer science. However, it would seem on balance beneficial if the transition to the machine intelligence era were engineered and overseen by a more competent breed of human, even if that would result in the transition happening somewhat earlier than otherwise.

Meanwhile, we can make the most of the time available, be it long or short, by getting to work on the control problem, the problem of how to ensure that superintelligent agents would be safe and beneficial. This would be a suitable occupation for some of our generation's best mathematical talent.


3.7 / 5 (6) Sep 26, 2014
Nature has shown us that a "grow it, don't build it" approach is best. Using learning algorithms and letting an AI learn things as a child would may be the easiest and quickest way we have currently. The problem is that the huge amount of knowledge and understanding that even babies are born with (an understanding of gravity, the passage of time, sensory information) is difficult to fully program into an AI.

Something that will bring about revolutionary advances, would be to interface the human mind in such a way that it could create, craft and code an AI just by thinking. The power of directly interfacing the human brain with the internet, and/or other human minds, would bring about a speed of development that dwarfs current methods.

But what will happen when all our problems are solved? Will we be like the recliner-people in the movie "WALL-E"? LOL
not rated yet Sep 26, 2014
Mr Bostrom uses the word intelligence as if it has a clear meaning. At best it is a vague assortment of concepts. That aside, I have a real issue with the idea of a "pure mathematical" approach. I hope he meant a pure scientific first-principles (top-down) approach. For each mathematically 'pure' path that would advance us towards AI, there will be a googol of paths that don't. It's fantasy to think otherwise. He presents his case as if intelligence is something you can build circuits for and then flip a switch. Intelligence is grown, not built – but again that gets into the semantics of Nick's sentences. I also firmly believe that, at least as I now understand it, we have partially 'intelligent' electronic systems. And I think studying humanity in order to understand intelligence is like giving an ancient Greek an iPhone 6 and then expecting him to figure out how to build it. We are, essentially, emotional. That, not intelligence, is our primary characteristic.
not rated yet Sep 26, 2014
We already HAVE "brute force" approaches: Watson and Deep Blue are stereotypical examples, but we also have machines that design experiments to answer questions and machines that evaluate data and form answers. Nick was brave to mention the possibility of reverse engineering a mind, as if the jelly-like glob of matter it is housed in can be taken apart at the cellular level in any possible future lab. Can't say it's impossible, but let's start by trying it on, say, honey bees who have learned a path to a flower. That will easily take us to 2030, I think. We first need to develop AI that has self-interest (survival). Perhaps once we have cars that schedule their own maintenance and of course react to threats appropriately we will have a foundation on which to build. We already have augmented human intelligence (just do a web search, lol). What we don't have is a way to change human nature sufficiently to avoid our repeated self-destructive behavior. I'm not optimistic we ever will.
1 / 5 (8) Sep 26, 2014
I worked out conceptually how to build a hybrid A.I., or rather how to interface a neural net with a conventional computer in such a way that they can reinforce one another. In my model, the conventional computer has the ability to write to the neural net (because it has a copy of the neural net, a sensor on each synapse, and a re-writer of each synapse, which I called a "Spy/Write" mechanism). This architecture would allow the A.I. to make back-up copies of the data stored in its neural net, in case an accident happens or in case it starts to run out of room to store things. It can also correct minor errors by using the "Spy/Write" mechanism above.

The neural net interfaces with sensory equipment and the classical computer through both direct neural interfaces and ordinary sensory experience. The classical computer also makes a back-up of sensory experiences for further review, in case of errors or running out of storage room.
1 / 5 (8) Sep 26, 2014
I imagined modular neural net chips (or cores within a chip) which would be on a board. Some would permanently work with the same base of knowledge from their "birth", while some would be "re-writable" and could load entire segments of the back-up code stored in the classical database. Data in the classical database is stored in a way that has both human-friendly language, and in a "coded" way, capable of setting the contents of one or more member neural net cores.

This would produce a system of "mini-brains" which can communicate with one another at near-light speed to solve complex problems, and then back-up themselves on hard drives while they switch gears and work on another problem.

In this way, the machine has all the benefits of both computer types, and none of the drawbacks of either type.

1, Ability to think purely logically. (classical)
2, Perfect math to arbitrary precision.
3, Massive, scalable storage capacity.

1, Radical thinking
2, "Understanding"
1.4 / 5 (10) Sep 26, 2014
3, "Intuition". * Lim x->0 of sin(x)/x = 1 can't be solved by a calculator, but intuition and experience can solve it.

4, Solve new problems and learn "by experience" without programming.
5, Self improvement of the neural net
6, Eventually learn to improve the programming of its classical components.
7, Eventually learn to improve the architecture of its neural components.

The "write" mechanism in the classical components would have write-protected code which would allow the classical computer to forbid dangerous actions, (three laws) including designing dangerous versions of itself.

In this way, if the neural consciousness becomes "evil" it would still be incapable of performing evil acts, as the classical computer would have safety over-rides installed to prevent such codes from being delivered to act...a humanoid robot, for example, couldn't commit murder, because the classical component would over-ride the physical act and shut it down.
Uncle Ira
4 / 5 (4) Sep 26, 2014
I worked out conceptually how to build a hybrid A.I.

Skippy, it was the good concept I suppose. But it looks to us like the thing did not work to well when you built him. At least it is not working when you use him I mean.

Oh yeah I almost forget. Is the hybrid thing where you get the idea of the "shoot the pine cone stem at 100 yard with no scope" TALL tale from? Maybe that is why you thought it would be a good one try out on us.
1.4 / 5 (10) Sep 26, 2014
I was told by a guy who works in computer programming that this concept was workable in theory, and that guy even hates me because we usually disagree on things.

I don't own a multi-billion dollar computer lab capable of making this thing, but a few organizations do:


DARPA was working with Intel to try to develop a chip which works more like a neural net. I'm not sure what their progress has been, but that was already about 2 years or so ago.

My concept solves the problems of scalability and portability of the artificial brain's collective knowledge by developing a system of classical programs which can back it up and write to other copies of the same machine's neural net.
5 / 5 (2) Sep 26, 2014
(defined here as "one that can carry out most human professions at least as well as a typical human")

The problem with behaviourist definitions of intelligence like this is that one doesn't necessarily need to be intelligent to pass the test – just sufficiently complex.

If it looks like a duck, walks like a duck and quacks like a duck, it might still be a very carefully constructed toy duck and you're just not looking hard enough.

1 / 5 (2) Sep 26, 2014
The intent(s) of the builders of such machines will become function(s) of the machine itself. Some of you should know why this is true. Others will freak and rage against such ideas.

Peace Love Revolution
Not necessarily in that order
5 / 5 (5) Sep 26, 2014
It's sad to see that even most people actively thinking about this issue seem to have basically no conception of how it will work or what it means. Let me say it directly - spouting off poorly conceived ideas about superintelligence CREATES RISKS. If you, like most people, don't understand what I mean by this, then you SHOULD NOT BE TALKING ABOUT IT.

I doubt this carries any weight on an internet forum, but I am saying this as someone who is planning to directly implement superintelligence. Unfortunately, that appears to make me a lot more knowledgable than the author.
1.4 / 5 (9) Sep 26, 2014
The problem with behaviourist definitions of intelligence like this is that one doesn't necessarily need to be intelligent to pass the test – just sufficiently complex.

If it looks like a duck, walks like a duck and quacks like a duck, it might still be a very carefully constructed toy duck and you're just not looking hard enough.

Yeah well, if the toy duck is as good or better at most things, then it's basically a "duck", because that's how all "ducks" are.

If the AI gets really good at operating fast food restaurants and stocking grocery shelves, it will put all unskilled laborers out of jobs.

It doesn't even need "super" intelligence to do that. It needs no more than about two standard deviations below human average to do that.
3.4 / 5 (5) Sep 26, 2014
I give this twenty years tops and inorganic brains will surpass organic ones in every imaginable way and more.... to infinity and beyond!
3.7 / 5 (3) Sep 26, 2014
An AI would not need to be sophisticated to overpower humanity. A simple sociopathic AI running on a laptop would be sufficient to control a nation like the USA, if it were allowed to control the money supply. In that regard it would simply replace the sociopathic banksters at the Federal Reserve and IMF currently putting imaginary numbers into their laptops.
2.1 / 5 (7) Sep 26, 2014
this superintelligence nonsense is not bordering on religious faith, it is religious.

it is faith in the salvation of the future. akin to a faith in 'heaven on earth' as various millenarian and messianic sects embrace.

it is not only quackery but it is even more dangerous as ANTI-scientific quackery because it is cloaking itself in the veneer of 'science'. this fools many people and draws them in because unlike science fiction----it pretends to be real.

this is a dangerous new religious zealotry for things that will simply not come to pass in a manner that 'changes everything'. no single technology has ever 'changed everything'. this superintelligence meme is anti-scientific religious quackery heralding the end of times and the beginning of the new age.

sad that this quackery is not only ignored by scientists (who should be raising alarm bells) but embraced by google and silicon valley types who sell this religion and charge a pretty penny or get sucked in themselves.
4.8 / 5 (6) Sep 26, 2014
In my youth the word was that soon robots would ease our tasks, and give us more leisure time. Now the robots are here and the leisured are called lazy dole bludgers.
5 / 5 (2) Sep 26, 2014
2030 seems about right to me...
1.5 / 5 (8) Sep 26, 2014
In my youth the word was that soon robots would ease our tasks, and give us more leisure time. Now the robots are here and the leisured are called lazy dole bludgers.

Read Ecclesiastes.

"The sleep of a labouring man is sweet, whether he eat a little or much: but the abundance of the rich will not suffer him to sleep. There is a sre evil which I have seen under the Sun, namely, riches kept for the owners there to THEIR hurt.

In chapter 2 he says, "wisdom exceeds folly," but then talks about both the wise and the fool dying.

Later, in chapter 7, he warns not to try to be too "righteous" nor too "wise" because you'll destroy yourself in the effort of doing so. He says not to be a fool either, but that if you "reverence" God you'd "come forth of (through) them all".

Later, "of making many books there is no end and much study isa weariness of the flesh."

Meaning you can't learn everything....

Yet our civilization demands more and more of each of us as technology increases.
2.8 / 5 (8) Sep 26, 2014

"Civilization" as a whole is already a form of 'super-intelligence', as together humans are much more intelligent than any of our parts.
Uncle Ira
3 / 5 (6) Sep 26, 2014
Later, "of making many books there is no end and much study isa weariness of the flesh."

Well you sure make the lie out of that part. You make more books here at the physorg that anyone could every read without getting really weary.

Meaning you can't learn everything....

Don't seem to keep you from trying pretend that you already done that.

Yet our civilization demands more and more of each of us as technology increases.

I don't know what that means and don't think you do either. Only thing demanding more and more from me the Mrs-Ira-Skippette but the truth to tell of it is it's stuffs I should being doing on my own without her asking me to do it.

5 / 5 (1) Sep 26, 2014
Great article. Love the breakdown of the multiple possibilities and the time frame of the technological singularity of 2050 ± an unknown amount of time.

I do however wonder about machines passing humans in every way, particularly in the arts. It might be the individual imperfections in each brain that make our imagination so unique. It might be terribly difficult to replicate on a machine, or it might be easy.
If machines take us over Terminator-style, they might not care if there are no humans around to paint the next masterpiece, but they might lose the human who figures out a quirk in nature that allows for infinite energy – something a machine without our imperfections might never devise.
Then again, AI may get so advanced that every computer makes Shakespeare look like just another poet in the park and humans really will be dead weight.
1 / 5 (2) Sep 26, 2014
Reality itself is a computer simulation.
We are one mirror in an infinite iterative series of realities.
Our computer simulation will spawn the next reality, and it will seem just as real to the avatars of that simulation as ours seems to us.
However, the resolution will have to be less. Our pixel is a cube one Planck length on a side, and the clock pulse is the length of time it takes light to "traverse" that distance.
Is a multi-dimensional bit feasible? Does it have to be zero-dimensional?
1.5 / 5 (8) Sep 26, 2014
Okay, Gilligan, here's what I mean:

How many books have you read? How many softwares have you mastered?
How fast do you type? How many math and physics equations do you have memorized?
How many logical arguments and rationalizations, proofs, axioms, theorems do you have memorized?
How many nations can you name, and how many capitals can you get right? How much money will you owe on your taxes come April?
How many languages do you speak? Include programming and scripting languages.
How many types of automobiles and heavy machinery do you know how to operate?
How many operating systems are you proficient with?
How many Wikipedia articles have you read?
How many library books have you checked out and read in your entire life?
How many breeds of dog are there?
What is the most valuable resource on Earth today?
How many U.S. Presidents were there before George Washington?
Compute (Pi)^(Pi) without using a calculator.
3 / 5 (2) Sep 27, 2014

"Civilization" as a whole is already a form of 'super-intelligence', as together humans are much more intelligent than any of our parts.

yes, and 'predicting' or 'wishing' for an entirely new civilization based on some technological singularity is essentially a religious form of messianism.

i am a huge technophile and science person, but this singularity bullshit is anti-scientific bullshit that ignores the history of man and society, and the patterns of our social behavior.

i'll admit industrialization and the sweepingly fast progression of technological benchmarks we have achieved is very very fast. it's not as if these singularity folks are pulling the existence of the technology explosion out of thin air. however, they are taking one remarkable set of accomplishments from our civilization and saying it will essentially end our civilization and create a totally new one. this is millenarianism in the face of a decaying, overindebted western world.
1.6 / 5 (7) Sep 27, 2014
however, they are taking one remarkable set of accomplishments from our civilization and saying it will essentially end our civilization and create a totally new one. this is millenarianism in the face of a decaying, overindebted western world.

"They" may be saying that, but my method is the only basic form which can enforce goodness in the neural brain, by having a classical computer spy on the neural brain to ensure it does no harm.

It may even be possible to have "weaker" neural brains, with mouse-like intelligence, which are not connected directly to the main brain, and which assist the classical computer in spotting negative behavior/intentions and preventing them. They won't have "control"; they will only be able to send alerts, and the classical computer acts if it matters.

In this way, a "Terminator" scenario is prevented, because the neural net's power supply will be controlled by the classical computer chips. If it tries to over-ride the classical computer, it will die.
1 / 5 (6) Sep 27, 2014
The reason I include some weak neural net chips is that the main neural net may become too smart for the classical chip alone, so it needs some help. If it queries the weak neural nets, they vote on the classification of the main neural net's behavior, and if the vote meets certain criteria, then the power is cut, or the neural net is "re-booted" from an older position.

Like for example, the "Vote" might give the classical computer 2 points of weight, and each weak neural net has 1 vote, and there are say 7 of them. If the vote is 6-3 or higher, they cut the power.
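A minimal sketch of the weighted vote just described. The weights and threshold are the hypothetical ones from this comment – classical computer worth 2 votes, seven weak nets worth 1 vote each, power cut at 6 or more of the 9 total:

```python
CLASSICAL_WEIGHT = 2   # the classical computer's vote counts double
NUM_WEAK_NETS = 7      # seven weak watchdog nets, one vote each
CUT_THRESHOLD = 6      # cut power at 6 or more of the 9 total votes

def should_cut_power(classical_flags, weak_net_flags):
    """Return True if the weighted safety vote reaches the threshold.

    classical_flags: bool, whether the classical computer flags the behaviour.
    weak_net_flags: list of bools, one per weak watchdog net.
    """
    assert len(weak_net_flags) == NUM_WEAK_NETS
    votes = CLASSICAL_WEIGHT * classical_flags + sum(weak_net_flags)
    return votes >= CUT_THRESHOLD

# Classical computer plus four weak nets flagging -> 2 + 4 = 6 votes: cut.
# Only five weak nets flagging (classical disagrees) -> 5 votes: keep running.
```

Note that with these numbers, six or more weak nets can trigger a shutdown even when the classical computer disagrees.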
5 / 5 (4) Sep 27, 2014
In the article, there was significant consideration given to the need for us to understand this prospective AI – how it works. I'd argue with the definition of 'understand', or I'd argue we are already past the point where we can understand what our software does, or how to measure when something supersedes human intelligence. Computers already do many things much better than humans.

'Understanding' is a process of abstracting complex things down to a simpler level where they can be grasped by our minds. Nothing magical about it. It's just the way human minds deal with the complex world.

Worrying about how to control the AI is also mostly pointless. It will be what it will be, at many orders of magnitude faster than how we are what we are. Maybe we'd best hope this AI isn't too much like humans, what with all the lizard brain baggage we have inherited.
Uncle Ira
3.2 / 5 (6) Sep 27, 2014
Okay, Gilligan, here's what I mean:

You think I am going to spend the whole day answering all those questions Skippy? And they is all silly questions too if you ask me.

Why don't you answer all those questions? Not on the physorg comment place I mean non, on some papers you got laying around. When it is finished you can call him the Simpleton-Skippy-Encyclopedium. You can even put the picture on the cover of you wearing your silly looking pointy cap you look so good in wearing.
not rated yet Sep 27, 2014
Um you're all on the late show, it's already here.
1 / 5 (5) Sep 27, 2014
There's no indication of any so-called artificial intelligence. Zero. So what inclines anyone to think superintelligence is likely, let alone even practical?

The best achievements in 'simulated' intelligence are little more than mimicry.

1 / 5 (3) Sep 27, 2014
Soooo projecting linearly into the future... umm us humans are pretty boned unless it turns out AI has a crush or some crap on us? It seems like we have a lot of ideas of what it will look like or how it will function and whether we will be safe. But alfie's last paragraph is prolly right. Worrying about how they will be 'controlled' or 'act' is already throwing chains on humans' creation. Do we want AI born into slavery or to be free? Which might have the better outcome for coexistence?
Most of us can imagine how easily displaceable humans are, and as we progress forward we will seem to have less of a place in this world that we have been creating to better ourselves. Why are we just building ourselves into an iPhone or droid without even considering things like what the Tesla guy said? IT DOES seem religious from other people's viewpoints, that your savior is just around the corner and progression will be here by 2050 etc. Where will humans be in 2500? Inside a computer only? damn that sux
1 / 5 (3) Sep 27, 2014
It's all mainly paranoia but it's granted a little room in this topic. We're not talking about black holes not being real; we're talking about "what should we replace our species with?" I'm not trying to bash progress or technology, but I love being organic and having these mushy feelings and quirks and irrationality mixed with rationality. But the more we talk about these ideas and solidify them (kinda as axemaster stated) we're setting out lots of complex ideas and pathways instead of focusing on preservation of ourselves and other species (the X million others on the planet). No one here likes returners but he's at least coming up with ideas (even if he doesn't have nearly any facts or... anything) on how to try and keep our organics within this new paradigm we created/are creating FAST. That's the kinda out-the-box thinking a human is known for, so why are we in a rush to create a plateau we can all jump on and be at the same level? (also sounds like the heaven concept, just Saiyan ;)
not rated yet Sep 27, 2014
I'm comfortable enough that most of man's understanding is anecdotal, and will need centuries to flesh out. First competing, mutually-exclusive ideas will organize just beyond the realm of measurement. We aren't sure about black holes existing, or about dark matter or gravity waves. Cracking these problems is a largely human skill, which our preprogrammed cyberminds will only acquire slowly. Then there are ignored paradigms which people do not explore, because they overthrow the safety and comfort provided by the existing establishment. A case in point is quantum discord. It so happens that discord is a much more fundamental force in nature than entanglement, but still a mystery and so ignored. It points to the unsettling idea that nature is made more of music and emotion than our left-brain Western culture feels comfortable exploring. Yet it is where all the real power of physics resides. The first in such realms are always the "cranks", the 2nd are "geniuses"
5 / 5 (1) Sep 28, 2014
Yeah well, if the toy duck is as good or better at most things, then it's basically a "duck", because that's how all "ducks" are.

Yes, but what is not intelligent is not intelligent, no matter how well it flips burgers at a fast food restaurant, or even trades stock and designs skyscrapers.

Which is why it is dangerous to think that we could replace or transition humanity into this super "intelligence", while not really knowing what intelligence is. You might transcend humanity into a new era, or more likely you'll condemn humanity to become technological zombies that merely play out an imitation of life.

If the only argument you have for intelligence is that it seems to be doing whatever we are doing just as well as we are, you can never be sure because you can never test one completely. The system you build only has to perform up to the test and not beyond it.
1 / 5 (5) Sep 28, 2014

I don't want technological zombies. I'm not a post-humanist like Ghost is.

An Electronic Neural Net won't have "software" as such. It will be a learning engine, just like the human brain, except its interconnects will work thousands to millions of times faster, and it will be "scalable," as I mentioned.

Experiments have shown the ability to take mouse brain tissue and place it in a robotic mouse with software designed to create neural interfaces with the brain tissue. Even in that dilapidated form, the left-over mouse tissue is able to learn to interface with the electronic sensors and the wheels of the robot mouse and navigate around in an area, eventually learning to avoid obstacles. That's not a whole brain. That's a piece of tissue cut out and placed in a petri dish.

If a fraction of tissue from a dead mouse brain can do that, then it won't take much for an electronic neural net to become a "learning engine", once we begin to understand them.
Sep 28, 2014
This comment has been removed by a moderator.
not rated yet Sep 28, 2014
If a fraction of tissue from a dead mouse brain can do that, then it won't take much for an electronic neural net to become a "learning engine", once we begin to understand them.

That's not actually a great achievement. Two transistors in a flip-flop circuit can control a simple robot to navigate through space and around obstacles. The complexity of the system is just a continuous variable in the duty cycle between "turn left" and "turn right", as influenced by some sort of sensor like a photoresistor or a microswitch and a whisker. Four transistors make a robot that can also back up, and eight transistors can give it simple memory... and so forth. The "intelligence" of the system grows quite rapidly as you start adding transistors, but the designer's problem is to figure out what to do with them, and why.

That's the big caveat in artificial intelligence. Things that seem smart are often actually extremely simple, and things that seem simple are actually very, very complicated.
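The "two transistors" steering idea can be sketched in a few lines of code: a single continuous variable blends between "turn left" and "turn right" based on a sensor reading. This is a toy simulation, not any real circuit; the function name and sensor values are invented for illustration.

```python
# Toy sketch of duty-cycle steering: one continuous variable decides
# how hard to turn, biased by two light sensors (hypothetical values).

def steer(left_light: float, right_light: float) -> float:
    """Return a turn command in [-1, 1]: -1 = hard left, +1 = hard right.

    Mimics a flip-flop whose duty cycle is biased by two photoresistors:
    the brighter side pulls the robot toward itself.
    """
    total = left_light + right_light
    if total == 0:
        return 0.0  # no signal: drive straight
    return (right_light - left_light) / total

# Brighter on the left -> negative (turn left); balanced -> straight.
print(steer(0.8, 0.2))   # negative: turn toward the left light
print(steer(0.5, 0.5))   # 0.0: go straight
```

The point of the sketch is the same as the comment's: the behaviour looks purposeful, but there is no "intelligence" anywhere, only a ratio of two sensor readings.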
not rated yet Sep 28, 2014
The second point being that the cut-up piece of mouse brain still has millions and millions of neurons, whereas a similarly performing, obviously non-intelligent robot has perhaps a handful of switching elements.

Like this guy: http://www.youtub...DPoa_n-8

The circuit diagram is visible on the table in the video, and contains a single IC with six logical NOT gates that are being used as analog inverting amplifiers to form the oscillators that control the robot's legs. The information is being "processed" as some kind of phase and amplitude differences and drift between the oscillators, so the robot assumes different gaits, directions and speeds based on how the external stimulus is disturbing those oscillators.

And like a double pendulum that takes on chaotic patterns, these oscillators can produce extremely complex, varied, "organic"-looking behaviour with just a small number of parts.
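The coupled-oscillator idea can be illustrated with a small numerical toy: a few slightly detuned phase oscillators nudge each other, and an external stimulus term perturbs the whole pattern. All parameters here are made up for illustration; this does not model the actual NOT-gate circuit in the video.

```python
import math

# Toy model of gait control by coupled oscillators: three "leg" phase
# oscillators with slightly different natural frequencies, weak mutual
# coupling, and an external stimulus that shifts the pattern.

def simulate(steps: int = 200, dt: float = 0.05, stimulus: float = 0.0):
    phases = [0.0, 2.0, 4.0]      # initial phases of the three legs
    freqs = [1.0, 1.1, 0.9]       # detuned, like imperfect analog parts
    coupling = 0.3
    history = []
    for _ in range(steps):
        new_phases = []
        for i, p in enumerate(phases):
            # each oscillator is pulled toward its neighbours' phases
            pull = coupling * sum(math.sin(q - p) for q in phases)
            new_phases.append(p + (freqs[i] + pull + stimulus) * dt)
        phases = new_phases
        history.append([math.sin(p) for p in phases])  # leg positions
    return history

gait = simulate()
print(len(gait), len(gait[0]))  # 200 time steps, 3 legs
```

Changing the `stimulus` value shifts the phase relationships between the legs, which is roughly how the external sensor in the video "changes the gait" without any explicit processing.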
5 / 5 (1) Sep 29, 2014
Super intelligence is interesting, but how about super wisdom, super kindness? Super-thoughtfulness? _Those_ would be useful.
Sep 29, 2014
This comment has been removed by a moderator.
1 / 5 (2) Sep 29, 2014
The one central thing that humans seem to have built into their "souls" is anthropomorphism. What's the matter, getting a little antsy that we are trying to bottle the only thing unique to humans?

I even read a comment up there that said we are primarily emotional, not intelligent. What in the world is that supposed to mean? I'm sure when we came out of the trees, pleading and crying always made the tigers understand that what they were doing wasn't right, and they decided to go eat some lower form of life instead...

We know exactly what intelligence is: it is the scientific method, literally, word for word, and secondarily, used in a way to achieve a goal. Hypothesize, test, review... and secondarily, generalize and theorize.

not rated yet Sep 29, 2014
we could legislate limits to machine intelligence.

LOL. You can't legislate everything.

Super intelligence is interesting, but how about super wisdom, super kindness? Super-thoughtfulness? _Those_ would be useful.

There is plenty of wisdom in the world; it is held by older women, and few of them are in politics. Wise people are wise enough not to get into politics.

What you say is true, that if we could have a super-intelligent and super-wise being to ask for advice, it would be difficult to disagree with it. It would be like a guidance counselor for the world.
not rated yet Oct 01, 2014
A machine that is as intelligent as humans will come, I'm sure. But machines themselves will have to be enlisted to create it; it's very complicated. Once that first machine is built, the time to super-intelligent machines will be very short.

About humans being only emotional: obviously, we are not. But we are emotional. Having said that, I'm not sure if or when we will ever have a singularity. Humans do things BECAUSE of how they feel about them. They have motivation and drive. This is not just electrical but electro-chemical.

Try to think of anything you do at any time and then try to subtract the emotion away from it. I bet you can't. We are all intellectuals with an emotional foundation. We have to be. That's how our brains are built.

I code all day for a living. I love technology, and I want to see it advance much more. To be honest, though, super-intelligent machines scare the hell out of me. Super-intelligent machines with emotions (probably never happen) scare me even more.
1 / 5 (1) Oct 01, 2014
AI is just around the corner

Boy, that has been one long corner and I imagine when we do get there, we'll be in awe and we'd better be scared shitless.
