Mastering multicore

Apr 26, 2010 by Larry Hardesty
Graphic: Christine Daniloff

(PhysOrg.com) -- MIT researchers have developed software that makes computer simulations of physical systems run much more efficiently on so-called multicore chips. In experiments involving chips with 24 separate cores -- or processors -- simulations of fluid flows were at least 50 percent more efficient with the new software than they were with conventional software. And that figure should only increase with the number of cores.

Complex computer models — such as atom-by-atom simulations of physical materials, or high-resolution models of weather systems — typically run on multiple computers working in parallel. A software management system splits the model into separate computational tasks and distributes them among the computers. In the last five years or so, as multicore chips have become more common, researchers have simply transferred the old management systems over to them. But John Williams, professor of information engineering in the Department of Civil and Environmental Engineering (CEE), CEE postdoc David Holmes, and Peter Tilke, a visiting scientist in the Department of Earth, Atmospheric and Planetary Sciences, have developed a new management system that exploits the idiosyncrasies of multicore chips to improve performance.

To get a sense of what it might mean to split a model into separate tasks, consider a two-dimensional simulation of a weather system over some geographical area — like the animated weather maps on the nightly news. The simulation considers factors like temperature, humidity and wind speed, as measured at different weather stations, and tries to calculate how they will have changed a few minutes later. Then it takes the updated factors and performs the same set of calculations again, gradually projecting its model out across hours and days.
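The time-stepping described above can be sketched in a few lines of Python. The update rule here is a made-up relaxation toward neighboring values, purely illustrative, not an actual weather model:

```python
# Illustrative time-stepping loop: from the current factors at each
# station, compute the next time step, then repeat.

def step(temps):
    """Advance a 1-D row of temperatures by one time step."""
    nxt = temps[:]
    for i in range(1, len(temps) - 1):
        # each interior point relaxes toward the average of itself
        # and its two neighbors (a toy update rule)
        nxt[i] = (temps[i - 1] + temps[i] + temps[i + 1]) / 3.0
    return nxt

state = [0.0, 0.0, 9.0, 0.0, 0.0]
for _ in range(2):           # two time steps
    state = step(state)
print(state)                 # [0.0, 2.0, 3.0, 2.0, 0.0]
```

Projecting the model hours or days ahead is just this loop run many thousands of times.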

Changes to the factors in a given area depend on the factors measured nearby, but not on the factors measured far away. So the computational problem can, in fact, be split up according to geographic proximity, with the weather in different areas being assigned to different computers — or cores. The same holds true for simulations of many other physical phenomena.
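That geographic splitting amounts to partitioning the simulation grid into contiguous blocks of cells, each of which can be handed to a different computer or core. A minimal sketch, with hypothetical names (this is not the researchers' actual code):

```python
# Partition a 2-D grid into spatially contiguous square chunks.
# Because each cell depends only on nearby cells, each chunk can be
# updated largely independently of the others.

def split_into_chunks(width, height, chunk_size):
    """Return a list of chunk bounding boxes covering the grid."""
    chunks = []
    for y0 in range(0, height, chunk_size):
        for x0 in range(0, width, chunk_size):
            chunks.append({
                "x0": x0, "y0": y0,
                "x1": min(x0 + chunk_size, width),
                "y1": min(y0 + chunk_size, height),
            })
    return chunks

chunks = split_into_chunks(100, 100, 25)
print(len(chunks))  # 16 chunks, each assignable to a different core
```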

Video: A computer model simulates the falling of a drop of water by calculating the forces that individual molecules exert on each other. The simulation can be broken into chunks, each representing a cluster of neighboring molecules, that are processed in parallel by different processing units, or “cores.”

Smaller is better

When such simulations run on a cluster of computers, the cluster’s management system tries to minimize the communication between computers, which is much slower than communication within a given computer. To do this, it splits the model into the largest chunks it can — in the case of the weather simulation, the largest geographical regions — so that it has to send them to the individual computers only once. That, however, requires it to guess in advance how long each chunk will take to execute. If it guesses wrong, the entire cluster has to wait for the slowest machine to finish its computation before moving on to the next part of the simulation.

In a multicore chip, however, communication between cores, and between cores and memory, is much more efficient. So the MIT researchers’ system can break a simulation into much smaller chunks, which it loads into a queue. When a core finishes a calculation, it simply receives the next chunk in the queue. That also saves the system from having to estimate how long each chunk will take to execute. If one chunk takes an unexpectedly long time, it doesn’t matter: The other cores can keep working their way through the queue.

Perhaps more important, smaller chunks mean that the system is better able to handle the problem of boundaries. To return to the example of the weather simulation, factors measured along the edges of a chunk will affect factors in the adjacent chunks. In a cluster of computers, that means that computers working on adjacent chunks still have to use their low-bandwidth connections to communicate with each other about what’s happening at the boundaries.

Multicore chips, however, have a memory bank called a cache, which is relatively small but can be accessed very efficiently. The MIT researchers’ management system can split a simulation into chunks that are so small that not only do they themselves fit in the cache, but so does information about the adjacent chunks. So a core working on one chunk can rapidly update factors along the boundaries of adjacent chunks.
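Whether a chunk plus its boundary ("halo") data fits in a core's cache comes down to simple arithmetic. A back-of-the-envelope check, with an assumed cache size and cell size that are not from the article:

```python
# Does a square chunk, plus a one-cell ring of boundary data copied
# from its neighbors, fit in a core's cache? Sizes are illustrative
# assumptions, not figures from the researchers' system.

CACHE_BYTES = 256 * 1024   # assumed per-core cache size (256 KB)
BYTES_PER_CELL = 8         # assume one double-precision value per cell

def fits_in_cache(chunk_side, halo_width=1):
    """True if the chunk plus its halo ring fits in the cache."""
    padded = chunk_side + 2 * halo_width
    return padded * padded * BYTES_PER_CELL <= CACHE_BYTES

print(fits_in_cache(64))    # True: small chunk plus halo fits
print(fits_in_cache(1024))  # False: large chunk overflows the cache
```

Under these assumptions, a 64-cell-wide chunk with its halo needs only about 34 KB, so boundary updates never leave the cache.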

E pluribus unum

In theory, a single machine with 24 separate cores should be able to perform a simulation 24 times as rapidly as a machine with only one core. In the February issue of Computer Physics Communications, the MIT researchers report that, in their experiments, a 24-core machine using an existing management system was 14 times as fast as a single-core machine; but with their new system, the same machine was about 22 times as fast. And, Williams says, the new system’s performance advantage compounds with the number of cores, “like compound interest over time.”
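Those figures can be restated as parallel efficiency, that is, achieved speedup divided by the ideal speedup of one per core:

```python
# Parallel efficiency of the two systems on a 24-core machine,
# using the speedup figures reported in the article.
cores = 24
print(round(14 / cores, 2))  # existing system: 0.58
print(round(22 / cores, 2))  # new system: 0.92
```

In other words, the new scheduler recovers most of the roughly 40 percent of capacity that the conventional approach left idle.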

Geoffrey Fox, professor of informatics at Indiana University, says that the MIT researchers’ system is “clever and elegant,” but he has doubts about its broad usefulness. The problems of greatest interest to many scientists and engineers, he says, are so large that they will still require clusters of computers, where the MIT researchers’ system offers scant advantages. “State-of-the-art problems will not run on single machines,” Fox says.

But Holmes points out that the model that he and his colleagues used in their experiments was a simulation of fluid flow through an oilfield, which is of immediate interest to the oilfield services company Schlumberger, which helped fund the research and employs Tilke when he’s not on loan to MIT. “We’re running problems with 50, 60 million particles,” Holmes says, “which is on the order of 20, 30 gigabytes.” Holmes also points out that 24-core computers “will not remain the state of the art for long.” Manufacturers have already announced lines of 128-core computers, and that could just be the tip of the iceberg.

Williams adds that, even for problems that still require clusters of computers, the new system would allow the individual machines within the clusters to operate more efficiently. “Cross-machine communication is one or two orders of magnitude slower than on-machine communication,” Williams says, “so it makes sense to keep cross-machine communication to a minimum, which is what our solution allows.”


More information: Project website: geonumerics.mit.edu/



User comments (5)


El_Nose
Apr 26, 2010
As every programmer knows, the speedup from adding cores or processors is limited by bandwidth, as the article stated, and you will never achieve a 1:1 speedup per core except on the most trivial of problems -- the ones easiest to split apart. While I am inclined to believe MIT researchers are better than most, I am curious to see the algorithm they split and still got a 22x speedup over 24 cores; that is not a trivial test problem. But given it's MIT I will accept it and politely review the paper for myself.
CSharpner
Apr 26, 2010
The article is right when it says many OSes just dump their stuff onto multi-core chips without really doing much (if anything) to optimize performance for multi-core, but putting tasks in a queue to be more efficient is nothing revolutionary either. Dividing tasks to work within the confines of individual cores and caches is not all that revolutionary either. They're both good ideas, of course. These are tasks that should be in the heart of the OS and/or the language compilers, with enough runtime variables that the OS can optimize them for the hardware they're currently running on (so a particular program isn't just optimized for one specific hardware spec, but can adapt to different machines with different numbers of processors and cache sizes).

In short, nothing really new here.

But, THIS is a significant step forward:
http://www.physor...136.html
Expiorer
Apr 27, 2010
A 50% increase with 24 cores.
"And that figure should only increase with the number of cores."
LOL
Actually a 50% increase is very close to SHlT.
My algorithms teacher said that 95% of software can be made at least 50% faster on the same system (he showed many examples).
sender
Apr 27, 2010
True hardware concurrency relies on reprogrammable process constructors for direction and parallel bus junctions for hardware synchronicity, rather than simple core distribution. It seems that linear functional programming is locked into function calls rather than process redirection, which loses cycles on calls, as opposed to simply formatting the construct and passively exciting the desired changes in the overlaying active environment.
El_Nose
Apr 28, 2010
Hey, powerup just went through and gave 1's to people without disputing anything stated??

@Expiorer

Yes, if you are writing very inefficient code. Part of programming is knowing the art of using your basic structures. But a 50% increase is positively outstanding if all you did was add cores. Wait till you take a computer architecture course; when you do the math you will understand.
