Dual-core? Quad-core? Future Computers May Have Hundreds of Processors

Mar 03, 2010

(PhysOrg.com) -- While today's top-line personal computers boast of dual- or quad-core processors to handle complex workloads, experts predict that processors with hundreds or even thousands of cores may be commonplace within the next decade.

That will enable computers to simultaneously perform a vast range of functions only dreamed about today.

But that poses a daunting task for the engineers who must design memory systems to work with these multi-core processors in a quick, energy-efficient and thermally cool manner.

Zhichun Zhu, University of Illinois at Chicago assistant professor of electrical and computer engineering, has been awarded a five-year, $400,000 National Science Foundation CAREER Award to investigate the architecture for building this next generation of computers.

"We have a lot of challenges facing us," she said. "If each core is running an independent application, each will need a piece of memory to store its data and instructions for the computation."

That is going to require a lot of memory, she said. While today's home computers typically have at least a gigabyte of DRAM -- dynamic random access memory -- to do the job, tomorrow's computers may need a terabyte -- that is, a thousand gigabytes -- or more. And the memory will not just be DRAM, but an assortment of types.

Keeping this assortment of memory functioning in a way that doesn't consume vast amounts of power, doesn't overheat, and fits in the compact package consumers demand will require what Zhu calls universal and scalable memory systems.

"We'll need a new memory architecture that can support diverse memory devices that when put together will work as a whole," said Zhu.

The UIC computer engineer will develop software programs to run simulations that test and validate ways to link diverse memory components that work seamlessly together.
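
The article does not describe that software, but a toy cost model suggests the kind of starting point such a simulation might use: give each memory tier an assumed access latency and tally what a stream of accesses would cost. The tier names, latencies, and access counts below are invented for illustration, not taken from Zhu's work.

    #include <stdio.h>

    /* Toy heterogeneous-memory cost model: each tier gets an assumed access
     * latency, and we tally how long a made-up stream of accesses would take.
     * Tier names, latencies, and access counts are all hypothetical. */
    typedef struct {
        const char *name;   /* kind of memory device */
        int latency_ns;     /* assumed access latency in nanoseconds */
    } MemTier;

    int main(void) {
        MemTier tiers[] = {
            { "SRAM cache",    1 },
            { "DRAM",         60 },
            { "Flash/PCM",  5000 },
        };
        long accesses[] = { 900000, 90000, 10000 };   /* pretend workload trace */

        double total_ns = 0.0;
        for (int i = 0; i < 3; i++) {
            total_ns += (double)accesses[i] * tiers[i].latency_ns;
            printf("%-10s %7ld accesses x %4d ns each\n",
                   tiers[i].name, accesses[i], tiers[i].latency_ns);
        }
        printf("total simulated memory time: %.3f ms\n", total_ns / 1e6);
        return 0;
    }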

Zhu's grant will support a graduate assistant and will involve undergraduate students who will learn of the problems and potential of the upcoming multi-core era, including the need to write complicated parallel computer programs.

Zhu said parallel computing has been around a long time, but was used mainly by computational scientists at large national laboratories.

"In the future, to get the most performance from personal computers, we'll need to go from sequential to parallel applications," she said. "Maybe all undergraduates will need to learn how to write parallel programming instead of just sequential code."

User comments: 27

jamey
2.1 / 5 (7) Mar 03, 2010
Will a normal user actually see any benefit from hundreds of cores? Certainly, in *some* limited jobs, you can use all the cores you can get - see the modern video card. But for most common workloads? I've got an AMD X2 dual-core 3000+ CPU, cranking along at 1800-2200 MHz, and seldom hitting more than 75% utilization of each. And that's with completely separate tasks, which are by nature almost completely parallel (save for kernel interactions for hardware accesses). Feeding those, I've got a terminal app with 4 tabs open, Chrome, XOSview monitoring the system, Seamonkey with 2 dozen tabs and a mail window open, and Firefox with 6 windows and 60-70 tabs. Even when I open Second Life, it's still doing a lot of busy-wait. Kilo-core CPUs? I can't really see them doing me any good. Closest I can figure is real-time raytracing for Second Life, and that's better done on the GPU card.
Omnitheo
5 / 5 (6) Mar 03, 2010
Second Life...

Obviously you're thinking of modern technology. You need to look at the future, where things like holography could become commonplace, and you will require much more powerful machines.

When reading this article, I was thinking robots. Think of Asimo, but with dedicated processors for each part of the robot: processors to control the legs, to control the arms, to calculate "thoughts" or appropriate responses. All working independently, but communicating with the master processor and sharing memory to create a uniform machine.
baudrunner
3 / 5 (4) Mar 03, 2010
Omnitheo thinks ahead. Currently, the graphics processor is task specific, as was the math co-processor during the 386/486 days - and still is. The direct memory access controller can also be considered a task-specific processor, so the future of computing will be task-specific CPUs in a multi-core chip. A parallel processing interface manages parallel operations demanded of a single application, so it also will be a task-specific component of the multi-core processor. That is the future of the CPU as I see it. The definition of "core" will undergo its own evolution over time.
jamey
4 / 5 (4) Mar 03, 2010
@Omnitheo - holography won't be that much more CPU intensive than video already is - and again, it's an embarrassingly parallelizable job - see the GPUs of the current day. With robots, you can put completely separate CPUs at each part of the robot, communicating over either a short-range wireless network or a really high-speed wired bus. Since you're moving macroscopic parts of the robot, feasible speeds are such that communication time won't be significant. Current CPUs are sufficient for this kind of thing - and already are being used that way. Robots are engineering now, not new research - except for AI aspects, such as vision and reaction to environment. Again, I'm not really seeing that much need for kilo-core CPUs, except in a few cases.
poof
5 / 5 (1) Mar 03, 2010
Hmm, let's see: video encoding, realtime Pixar-grade graphics, AI, decoding streaming holographic content, @home projects being solved overnight, cancer being cured, the universe being mapped, god being proven/disproven, just to name a few.
jamey
1.5 / 5 (2) Mar 03, 2010
None of those are average home user applications, except video encoding - and that's less and less needed, as more and more video comes in digital in the first place. Pixar-grade graphics aren't really that great - they look good, but they don't necessarily look all that realistic. Try Final Fantasy: The Spirits Within for really good graphics. Admittedly, the BOINC stuff would be nice - but that's using surplus CPU cycles - not CPU cycles I *need*.
PedroMann
not rated yet Mar 03, 2010
All I know is we don't have the holodeck yet. Yeah, I think there is still room for improvement.
Nik_2213
1 / 5 (1) Mar 03, 2010
Uh, remember Inmos' Transputer, and the Occam language devised to handle its multiple cores ??

http://en.wikiped...ansputer
Mr_Frontier
not rated yet Mar 03, 2010
Every cell of my skin is notorious for being classified as a mini-CPU. Tell me that isn't enough persuasion to do as much as we can to develop in the same direction, at some point. We have done very well in rapid innovation of semi-conductor technology; can't stop this train now.
PinkElephant
3 / 5 (2) Mar 03, 2010
@jamey,
Again, I'm not really seeing that much need for kilo-core CPUs, except in a few cases.
Try to envision your computer as an intelligent assistant, and an intellectual peer. You'll converse with it, you'll play with it, bounce ideas off it, take its advice, interact with it in organic and unscripted ways. What I'm talking about is AI taken to levels nonexistent today and only envisioned in sci-fi. To run what's essentially an artificial brain in real-time (carefully side-stepping the issue of enslaving a human-equivalent intelligence), you can easily use up not just thousands, but millions of cores. And that's long before we talk of holodecks or The Matrix...
dirk_bruere
not rated yet Mar 03, 2010
I would not mind betting that the researchers do not come up with anything that wasn't discovered in the 1970s/80s. A *vast* amount of research on parallel processing was done back then.
fixer
not rated yet Mar 04, 2010
Hmm, I can see a computer as an intellectual peer, but how will it see me?
As someone who pays the electricity bill while it chats to its pals on the internet?

I don't like the sound of this!
PinkElephant
5 / 5 (1) Mar 04, 2010
but how will it see me?
Not exactly what you meant, fixer, but here's another great example where massive computational power beyond anything currently available is required: computer vision. Imagine how many cores it would take to segment, model, track, analyze, and reintegrate a high-resolution video stream in real-time, and with a hefty dollop of machine learning thrown in...
Buyck
not rated yet Mar 04, 2010
Of course we will have more cores! But I think we're forgetting 3D chip technology. The vertical stacking of circuits or chips above each other will continue to develop in the coming years. In time that will replace spreading cores across the chip surface. 3D chip technology is also faster and consumes less power, although there are some tough challenges ahead!
gwrede
2.3 / 5 (3) Mar 04, 2010
I don't see the average programmer doing massively parallel code for a long time. Firstly, we tend to think linearly, so already the block diagram of a modestly complicated program is hard to grasp for most. Secondly, the current set of languages simply sucks at PP.

For years to come, PP will stay within the OS and the GUI. Only isolated stuff will be done in parallel, mostly with the GPU.

Thirdly, would an office program get better with massive parallelism? I don't think so. Same with almost any widely used program today. (I know there are /some/ exceptions.)
taka
not rated yet Mar 04, 2010
Of course it is impossible to do massively parallel programming with existing programming languages. But try to imagine a computer as a continuous medium that transforms data flowing through it like waves. It will work much better...
Chef
1 / 5 (1) Mar 04, 2010
If this does come to fruition then there really could be a revolution in industry and education, among other things. What I could easily see happening, for example, would be something like the AI system Tony Stark used in Iron Man while building the suit, where you would be able to make changes in real time verbally, or even imagine the VI interface from the game Mass Effect. For home education you could simply ask for a lesson in whatever subject you want and have a holographic tutor giving you the lesson and tracking your progress.
jamey
1 / 5 (1) Mar 04, 2010
Y'all speak as though the solution to AI is simply to throw more silicon at it. It's *NOT*, or we'd have had AIs quite a few years ago. AI is like fusion - it's twenty years away - FOREVER!
DaffyDuck
not rated yet Mar 04, 2010
"Ya'll speak as though the solution to AI is simply throw more silicon at it. It's *NOT*, or we'd have had AIs quite a few years ago. AI is like fusion - it's twenty years away - FOREVER!"

No, true AI probably won't come until a machine reaches the processing power of the human brain which will happen in around 15 years. If Moore's law holds up, we are on track to have the ability to simulate the human brain at the molecular level (calcium channel by calcium channel) around that time. After that, we can eliminate the inefficiencies in the design of the brain and do more with less computing power.

I don’t hold much hope in creating artificial intelligence by just trying to figure out how intelligence works and writing a program to try to mimic it, which is what we've mostly been doing until recently. We are going to have to copy what is known to work and then re-engineer it.
PinkElephant
not rated yet Mar 04, 2010
If Moore's law holds up, we are on track to have the ability to simulate the human brain at the molecular level (calcium channel by calcium channel) around that time. After that, we can eliminate the inefficiencies in the design of the brain and do more with less computing power.
This is where I have a HUGE ethical problem. Isn't what you're proposing, essentially equivalent to vivisection of a human being? I don't have a problem with trying to create a complete simulation of a mouse's brain, but anything approaching human intelligence is incredibly problematic from an ethical POV.
taka
not rated yet Mar 05, 2010
AI has nothing to do with silicon, that's for sure. An insect with only a few brain cells has more intelligence than has ever been achieved by the biggest supercomputers capable of simulating all the molecules involved in those few cells.
yoowhoo
not rated yet Mar 06, 2010
Kilo-core processors will be needed to break us down into atoms and correctly reconstitute us elsewhere. Beam me up! And when you plug yourself in to the kilo-core computer it will perform untold system checks on you and send out micron sized robots to cure what ails you.
John_balls
not rated yet Mar 06, 2010
If Moore's law holds up, we are on track to have the ability to simulate the human brain at the molecular level (calcium channel by calcium channel) around that time. After that, we can eliminate the inefficiencies in the design of the brain and do more with less computing power.
This is where I have a HUGE ethical problem. Isn't what you're proposing, essentially equivalent to vivisection of a human being? I don't have a problem with trying to create a complete simulation of a mouse's brain, but anything approaching human intelligence is incredibly problematic from an ethical POV.

I'm lost, what's so unethical about it?
PinkElephant
not rated yet Mar 07, 2010
I'm lost, what's so unethical about it?
What's unethical about experimentation on human beings?
GrayMouser
1 / 5 (2) Mar 07, 2010
I would not mind betting that the researchers do not come up with anything that wasn't discovered in the 1970s/80s. A *vast* amount of research on parallel processing was done back then.

Back in the 80s a company (Connection Machine Inc.) created a system with thousands of processors (the Connection Machine) which had reconfigurable interconnects. The company went out of business but wrote a book on their studies. They found that the configuration of the processors (mesh, toroid, hypercube, etc.) had no significant effect on how long it took to solve a specific issue. Their conclusion was that we don't understand what is going on within the computer well enough to determine which processor configuration is significant for solving which class of computing problem.

As with others, they found that increasing the number of processors gave diminishing returns in computing power due to data bottlenecks in the system.
taka
not rated yet Mar 30, 2010
Increasing the number of processors definitely creates data bottlenecks if they try to make connections from any processor to any other. It is simple math: the demand for interconnect grows rapidly in that case. The solution is to connect only neighbours (or otherwise relevant ones). Of course this demands completely different algorithms and different thinking from programmers, and that seems to be the hardest part.
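
A minimal sketch of the neighbour-only pattern taka describes, assuming a simple 1-D grid: each cell is updated from only its two immediate neighbours, so no core ever needs an any-to-any connection. The OpenMP pragma merely stands in for many-core hardware; all values are illustrative.

    #include <stdio.h>

    #define CELLS 16
    #define STEPS 10

    int main(void) {
        double cur[CELLS], nxt[CELLS];

        for (int i = 0; i < CELLS; i++)        /* one hot spot in the middle */
            cur[i] = (i == CELLS / 2) ? 100.0 : 0.0;

        for (int step = 0; step < STEPS; step++) {
            /* Each cell reads only cells i-1 and i+1: a fixed, local pattern
             * that never requires an any-to-any interconnect. */
            #pragma omp parallel for
            for (int i = 1; i < CELLS - 1; i++)
                nxt[i] = 0.25 * cur[i - 1] + 0.5 * cur[i] + 0.25 * cur[i + 1];
            nxt[0] = cur[0];                   /* keep fixed boundaries */
            nxt[CELLS - 1] = cur[CELLS - 1];

            for (int i = 0; i < CELLS; i++)    /* advance to the next step */
                cur[i] = nxt[i];
        }

        for (int i = 0; i < CELLS; i++)
            printf("%.2f ", cur[i]);
        printf("\n");
        return 0;
    }
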
taka
not rated yet Mar 30, 2010
The algorithm must be built as if there were infinitely many processors (each with local memory only) packed like a continuous substance, and then the algorithm gets mapped (automatically, presumably) onto the existing processors and interconnect. The actual interconnect topology is (almost) irrelevant; only its dimensionality matters. If it has more than 3 dimensions it cannot be built, since there is only 3-D space available to fit the processors and interconnect, and any attempt to use more makes the interconnect consume exponentially growing resources until it simply no longer fits into our space.

Any algorithm that uses no more dimensions than the interconnect can be mapped directly. Algorithms that use more dimensions can also be mapped, but they have to be transformed into fewer dimensions first, which of course means efficiency losses, but it should be possible.