IBM pursues chips that behave like brains

IBM's Cognitive Computing Chip can mimic the way the human brain works

Computers, like humans, can learn. But when Google tries to fill in your search box based on only a few keystrokes, or your iPhone predicts words as you type a text message, it's only a narrow mimicry of what the human brain is capable of.

The challenge in training a computer to behave like a human brain is both technological and physiological, testing the limits of computer and brain science. But researchers from International Business Machines Corp. say they've made a key step toward combining the two worlds.

The company announced Thursday that it has built two prototype chips that, it says, process data more like the way humans digest information than the chips that now power PCs and supercomputers do.

The chips represent a significant milestone in a six-year-long project that has involved 100 researchers and some $41 million in funding from the government's Defense Advanced Research Projects Agency, or DARPA. IBM has also committed an undisclosed amount of money.

The chips offer further evidence of the growing importance of "parallel computing," or computers doing multiple tasks simultaneously. That is important for rendering graphics and crunching large amounts of data.

The uses of the IBM chips so far are prosaic, such as steering a simulated car through a maze, or playing Pong. It may be a decade or longer before the chips make their way out of the lab and into actual products.

But what's important is not what the chips are doing, but how they're doing it, says Giulio Tononi, a professor of psychiatry at the University of Wisconsin at Madison who worked with IBM on the project.

A key feature of the chips is their ability to adapt to types of information they weren't specifically programmed to expect.

"There's a lot of work to do still, but the most important thing is usually the first step," Tononi said in an interview. "And this is not one step, it's a few steps."

Technologists have long imagined computers that learn like humans. Your iPhone or Google's servers can be programmed to predict certain behavior based on past events. But the techniques being explored by IBM and other companies and university research labs around "cognitive computing" could lead to chips that are better able to adapt to unexpected information.

IBM's interest in the chips lies in their potential to help process real-world signals such as temperature, sound, or motion and make sense of them for computers.

IBM, which is based in Armonk, N.Y., is a leader in a movement to link physical infrastructure, such as power plants or traffic lights, and information technology, such as servers and software that help regulate their functions. Such projects can be made more efficient with tools to monitor the myriad analog signals present in those environments.

Dharmendra Modha, project leader for IBM Research, said the new chips have parts that behave like digital "neurons" and "synapses," which makes them different from other chips. Each "core," or processing engine, has computing, communication and memory functions.

"You have to throw out virtually everything we know about how these chips are designed," he said. "The key, key, key difference really is the memory and the processor are very closely brought together. There's a massive, massive amount of parallelism."
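Modha's description — memory and processing fused on each core, with massive parallelism — can be illustrated with a toy model. Everything below (the class name, the weight ranges, the threshold and leak parameters) is an assumption made up for illustration, not IBM's actual design: each "core" stores its synaptic weights locally, right next to the update loop that uses them, instead of in a separate memory bank.

```python
# Illustrative sketch (not IBM's design): a tiny "neurosynaptic core"
# where synaptic weights (memory) live inside the same object that
# performs the computation on them.

import random

class NeurosynapticCore:
    """A toy core: n digital 'neurons' wired by an n x n 'synapse' matrix."""

    def __init__(self, n_neurons, threshold=1.0, leak=0.9, seed=0):
        rng = random.Random(seed)
        self.n = n_neurons
        self.threshold = threshold
        self.leak = leak
        # Local memory: the synaptic weights are stored on the core itself.
        self.weights = [[rng.uniform(-0.2, 0.5) for _ in range(n_neurons)]
                        for _ in range(n_neurons)]
        self.potential = [0.0] * n_neurons  # membrane potentials

    def step(self, external_input):
        """One tick: fire neurons over threshold, then integrate and leak."""
        spikes = [1 if v >= self.threshold else 0 for v in self.potential]
        for i in range(self.n):
            if spikes[i]:
                self.potential[i] = 0.0  # reset after firing
        for j in range(self.n):
            # Recurrent input: sum of weights from every neuron that fired.
            recurrent = sum(self.weights[i][j]
                            for i in range(self.n) if spikes[i])
            self.potential[j] = (self.potential[j] * self.leak
                                 + external_input[j] + recurrent)
        return spikes

core = NeurosynapticCore(n_neurons=4)
for _ in range(5):
    out = core.step([0.4, 0.0, 0.3, 0.0])
print(out)  # a list of 0/1 spike flags, one per neuron
```

In a conventional chip the weight matrix would sit in a separate DRAM and be fetched across a bus each tick; here the point of the sketch is simply that weights and update logic live together, which is the co-location Modha is describing.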

The project is part of the same research that led to IBM's announcement in 2009 that it had simulated a cat's cerebral cortex, the thinking part of the brain, using a massive supercomputer. Using progressively bigger supercomputers, IBM had previously simulated 40 percent of a mouse's brain in 2006, a rat's full brain in 2007, and 1 percent of a human's cerebral cortex in 2009.

A computer with the power of the human brain is not yet near. But Modha said the latest development is an important step.

"It really changes the perspective from 'What if?' to 'What now?'" Modha said. "Today we proved it was possible. There have been many skeptics, and there will be more, but this completes in a certain sense our first round of innovation."


©2011 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

Citation: IBM pursues chips that behave like brains (2011, August 18) retrieved 21 September 2019 from


User comments

Aug 18, 2011
I hope this won't end up like the damn iPhone autocorrect :D

Aug 18, 2011
Will this new brain be like a beautiful mind? I guess it will be a copy of the researchers' personalities at first.
Will it have juridic obligations? Bla bla...
Will it decide to start a revolution or a war? Bla bla...

Aug 18, 2011
Believe it or not, I actually posted THIS EXACT SAME CONCEPT on the main forum like over a year ago, and again several months ago. Ended up getting a month suspension over it.

"The key, key, key difference really is the memory and the processor are very closely brought together. There's a massive, massive amount of parallelism."

No wait, FBM, on the regular forum, says this is impossible and would never work. He's a text-book comp sci major. MUST know what he's talking about, right?

Sometimes the experts aren't really experts.

the new chips have parts that behave like digital "neurons" and "synapses" that make them different than other chips. Each "core," or processing engine, has computing, communication and memory functions

Again, discussed this at length months, YEARS ago, and the admin and forum mafia mocked me and even banned me.

FBM, adoucette, and Rpenner couldn't possibly be wrong. IBM and DARPA must be making it up!

Aug 18, 2011
AI's that have actual brains rather than algorithms that are simply governed by code is a scary idea. If governed by code, the most godlike AI could be programmed to love humans before activation. AI's that are not governed by code could use for their own devices the resources we need. Life evolved by chance; technology evolves by design.

Aug 18, 2011
Most robots are used in applications where they couldn't rebel even if they wanted to: lifters, assemblers, loaders, etc, are often mounted in place along an assembly line.

It would be useful to have assembly line robots with perhaps insect to rodent level problem solving, which are capable of solving minor malfunctions and other minor problems on their own, without intervention from maintenance tech or an operator.

This could include:

minor bugs / calibrate its own hardware
minor bugs / calibrate adjacent assembly line hardware
random alignment problems in and among components to be assembled
Self optimization

If an assembly line robot could run on its main chip and some combination of software and hardware interface with a "weak" neural net problem solving engine, then it could do both the high-volume "boring" robot work, and solve these minor problems, which plague ordinary automated assembly systems, without human intervention.

Aug 18, 2011
The company announced Thursday that it has built two prototype chips that it says process data more like how humans digest information than the chips that now power PCs and supercomputers.

Oooookay...that's not much of an achievement (as current chips are nothing like human processing capabilities)

Otherwise this article is a bit light on information. Basically it looks like they took a chip (which has dedicated areas for memory, processing and communication with the outside) and simply intermixed the three aspects (many small cores each with a little memory and its own communication facility)

Equating this with 'neurons' and 'synapses' is a pretty euphemistic analogy.

Don't get me wrong: It's a good idea because there are many applications that can benefit from massive parallelism. But this architecture gets us not one bit closer to AI, as the implementation difficulties remain exactly the same.

Aug 18, 2011
the new chips have parts that behave like digital "neurons" and "synapses" that make them different than other chips. Each "core," or processing engine, has computing, communication and memory functions

It's pretty clear from this paragraph that it's not "simply" massive parallelism. It clearly says it has parts behaving like digital neurons and synapses.

They are doing precisely what I outlined, and making physical hardware that attempts to emulate the physical brain, at least in some respects.

Difficulties amount to little more than failure of imagination and creativity.

The goal here would be to develop neural net "cards," similar to video cards or ram sticks, which would be plugged into the motherboards of conventional computers via ports, and thereby interface both with one another and the conventional hardware and processors.

The technology to do this already exists. It's just they currently make more money off consumer products, game machines, and services.

Aug 18, 2011
antialias_physorg said, "Otherwise this article is a bit light on information."

This BBC News article has a little bit more information.

Or the original press release:

Rather than being simply 256 cpu's-with-memory on a chip, these chips seem to optimize neural-programming learning-by-pruning algorithms. DARPA has been interested in that topic for many years. These chips may also have I/O optimizations for interconnecting with other chips to build larger networks. Low power is also mentioned.
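The "learning-by-pruning" idea mentioned above can be sketched in a minimal, hypothetical form: start from a dense matrix of synaptic weights and zero out the weakest fraction. The function name, the cutoff scheme, and the numbers below are assumptions made for illustration, not anything from IBM's or DARPA's implementation.

```python
# Hedged sketch of magnitude-based synaptic pruning: keep only the
# strongest connections, zeroing the weakest fraction of the matrix.

def prune_weakest(weights, fraction):
    """Zero out the given fraction of connections with smallest magnitude."""
    flat = sorted(abs(w) for row in weights for w in row if w != 0.0)
    if not flat:
        return weights
    cutoff_index = int(len(flat) * fraction)
    cutoff = flat[min(cutoff_index, len(flat) - 1)]
    return [[0.0 if abs(w) <= cutoff else w for w in row] for row in weights]

# A toy 3x3 "synapse" matrix: 9 nonzero connections of varying strength.
dense = [[0.9, -0.05, 0.4],
         [0.01, 0.7, -0.3],
         [0.2, -0.02, 0.6]]

sparse = prune_weakest(dense, fraction=0.4)
remaining = sum(1 for row in sparse for w in row if w != 0.0)
print(remaining)  # → 5 (the four weakest of 9 connections were dropped)
```

In a training loop this step would typically alternate with weight updates, so the network "learns" which connections matter and discards the rest — the rough strategy the coverage attributes to these chips.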

Aug 18, 2011

Good find...

This is exactly the kind of stuff I was talking about, and FBM mocked me. LOL. Joke's on HIM.

They've implemented this almost EXACTLY like I hypothesized, except my concept was overly idealistic, but whatever...

Gee, that's exactly what I was proposing, through using a system of nano-scale "hubs" to emulate dynamic networking...

The 10 billion neurons, 100 trillion synapses system should be possible and even mass producible within maybe 15 years.

The key will be in how to scale a neural network, or else perhaps how to scale a network of neural networks via normal modularity concepts.

If used as like existing expansion cards which plug into a port, this would be like having the "Brain Trust" on a chip.

It's also clear that this is a HYBRID architecture, which FBM also didn't believe possible. This has both PROGRAMMABLE and LEARNING components, which means it should have the "robot rebellion" fail-safe built right into itself.

Aug 18, 2011
This is just the sort of thing I like, because it represents someone using existing technology to develop entirely new types of computers and devices, instead of just trying to do the same old things better.

Imagine this type of machine being used in image finding for Quality Assurance, it would be able to better understand what constitutes a "bad" label vs a good one, etc.

Security and facial recognition...It could search databases and match faces at airports and depots to help stop terrorists. It would have computer speed and "recall," but "intelligent" problem solving and facial matching abilities.

We could even use one of these suckers in like Congress, to have it permanently on the internet AND listening to debates, and it could butt in with the real facts of the matter, i.e. "Oh pardon me, Mr. Speaker, but the 'gentleman' from Texas is mistaken about the facts of the issue."

Aug 18, 2011
A computer moderator that is intelligent enough to discover and recognize the facts, but is not biased by deceptive human concerns and is not greedy or corrupt, would make an excellent method of exposing the falsehoods of human government.

Of course, we would not have this governing humans, but the point is it would make an excellent TOOL for forcing humans to be honest in debate if you had a neutral intelligence that only objectively examined, researched, studied, and understood the facts and didn't lie about them the way lobbyists and politicians do.

Additionally, it could even instruct, brief, and de-brief human decision makers about technologies, problems and possible solutions of which they may not be aware. A human who accidentally discovers something might not be aware of a seemingly irrelevant application, but an intelligent robot with access to all human knowledge would recognize these "irrelevant" applications, and be able to alert its owner of them...

Aug 18, 2011
Imagine if you instantly got an email any time anyone in the world discovered a technology relevant to your own personal or business interests, or relevant to your field of study, or perhaps even a cross-disciplinary technology that you might not have been interested in, but which turns out to be extremely relevant to what you were doing?

You're working on a nano-computer system, and someone else discovered an appropriate power supply that you didn't think of, etc. It would be useful if the "internet" itself alerted you to this sort of stuff so that you wouldn't overlook it, or wouldn't need to discover it "through the grapevine".

It would be like Google, except it automatically learns and searches for possibilities and connections of technology and knowledge without the human needing to enter the parameters, because it would try to learn everything about everything...

Aug 18, 2011
Imagine sending a space probe or a rover with a neural net to Mars or other planets!

it would be able to solve its own pathfinding to travel between programmed coordinates, instead of human operators planning and plotting a course the way they did with Spirit and Opportunity!

This would allow the robot to cover SCORES even HUNDREDS of times as much terrain during its mission, AND identify interesting geology or mineralogy ahead of time, with or without human input.

This would make manned spaceflight absolutely pointless, except for colonization, which it damn near is now anyway, but it would offer SOME of the advantages of manned space flight with none of the drawbacks.

Aug 18, 2011
Techno1/QC last comment: 8/17/11 - 11:27AM, after a blistering pace of 1 post every 10 minutes or so he goes silent.

Nanobanano account created: 8/17/11 - 1:27PM, and he has set a blistering pace of ~1 post every 10 minutes or so.

What a coincidence.

Aug 18, 2011
Plus, I nearly forgot the more "human" application of this, such as potentially one day using this as an interface or a "bridge" to patch human brain damage.

The circuitry could adapt itself to replace damaged neurons and synapses to potentially restore primary brain function, and motor control, and sensory perception. It would be like a pacemaker for the brain to repair and bridge injuries, or to enhance our brains like the Borg to have a wet wired neural net connected to the internet interfacing directly with our brains!

Resistance is Futile!

Aug 19, 2011
Thanks for the link Yogaman. Reading the IBM press release I realize this is exactly what I described with some on chip processing.

Aug 21, 2011
And here is another link...
