New research provides effective battle planning for supercomputer war

Nov 11, 2010
This is Professor Stephen Jarvis of the University of Warwick. Credit: University of Warwick

New research from the University of Warwick, to be presented at the world's largest supercomputing conference next week, pits China's new No. 1 supercomputer against alternative US designs. The work provides crucial new analysis that will benefit the battle plans of both sides in an escalating war between two competing technologies.

Professor Stephen Jarvis, Royal Society Industry Fellow at the University of Warwick's Department of Computer Science, will tell some of the 15,000 delegates in New Orleans next week how the general-purpose GPU (GPGPU) designs used in China's 2.5 Petaflops Tianhe-1A fare against alternative designs employed in the US; these use relatively simple processing cores brought together in parallel by highly effective and scalable interconnects, as seen in the IBM BlueGene architectures.

Professor Jarvis says:

"The 'Should I buy GPGPUs or BlueGene' debate ticks all the boxes for a good fight. No one is quite sure of the design that is going to get us to Exascale computing, the next milestone in 21st-century computing, one quintillion floating-point operations per second (1018). It's not simply an architectural decision either – you could run a small town on the power required to run one of these supercomputers and even if you plump for a design and power the thing up, programming it is currently impossible."

Professor Jarvis' research uses mathematical models, benchmarking and simulation to determine the likely performance of these future computing designs at scale:

"At Supercomputing in New Orleans we directly compare GPGPU designs with that of the BlueGene. If you are investing billions of Dollars or Yuan in supercomputing programmes, then it is worth standing back and calculating what designs might realistically get you to Exascale, and once you have that design, mitigating for the known risks – power, resilience and programmability."

Professor Jarvis' paper uses mathematical modeling to highlight some of the biggest challenges in the supercomputing war. The first of these is a massive programming/engineering gap: even the best computer programmers struggle to use more than a small fraction of the computing power that the latest supercomputing designs offer, and this will continue to be a problem without significant innovation. Professor Jarvis says:

"if your application fits, then GPGPU solutions will outgun BlueGene designs on peak performance" – but he also illustrates potential pitfalls in this approach – "the Tianhe-1A has a theoretical peak performance of 4.7 Petaflops, yet our best programming code-based measures can only deliver 2.5 Petaflops of that peak, that's a lot of unused computer that you are powering. Contrast this with the Dawn BlueGene/P at Lawrence Livermore National Laboratory in the US, it's a small machine at 0.5 Petaflops peak [performance], but it delivers 0.415 Petaflops of that peak. In many ways this is not surprising, as our current programming models are designed around CPUs."

But the story doesn't end there. "The BlueGene design is not without its own problems. In our paper we show that BlueGenes can require many more processing elements than a GPU-based system to do the same work. Many of our scientific algorithms, the recipes for doing the calculations, just do not scale to this degree, so unless we invest in this area we are just going to end up with fantastic machines that we cannot use."

Another key problem identified by the University of Warwick research is that, in the rush to use excitingly powerful GPGPUs, researchers have not yet put sufficient effort into devising the technologies needed to link them together in parallel at massive scale.

Professor Jarvis' modeling found that small GPU-based systems solved problems between 3 and 7 times faster than traditional CPU-based designs. However, he also found that as the number of processing elements linked together grows, the performance of GPU-based systems improves at a much slower rate than that of the BlueGene-style machines.
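The paper's own scaling data are not reproduced here, but the qualitative pattern (a large advantage at small node counts that erodes as the machine grows) can be illustrated with two invented strong-scaling curves in C: a fast node with a high per-step synchronization cost against a slower node that synchronizes cheaply. The constants are chosen only so that the small-system speedup starts in the 3-7x range mentioned above; where, or whether, the curves cross on real machines is exactly what modeling work of this kind has to determine.

/* Two invented strong-scaling curves for a fixed amount of work:
 * a GPU-like node (fast, but with a high per-step synchronization cost)
 * against a BlueGene-like node (slower, but cheap to synchronize).
 * None of these constants is measured; the crossover point is purely
 * an artifact of the chosen values. */
#include <math.h>
#include <stdio.h>

static double runtime(double work_flop, double node_flops, int nodes, double sync_s)
{
    return work_flop / (node_flops * (double)nodes) + sync_s * log2((double)nodes);
}

int main(void)
{
    const double work = 1e15;                             /* fixed problem size (flop) */
    for (int nodes = 16; nodes <= 16384; nodes *= 4) {
        double t_gpu = runtime(work, 5e11, nodes, 0.20);  /* fast node, costly sync */
        double t_bg  = runtime(work, 1e11, nodes, 0.05);  /* slow node, cheap sync  */
        printf("%6d nodes: GPU-like %8.2f s  BlueGene-like %8.2f s  speedup %5.2fx\n",
               nodes, t_gpu, t_bg, t_bg / t_gpu);
    }
    return 0;
}

With these made-up numbers the GPU-like system starts out roughly five times faster at 16 nodes, but the advantage shrinks steadily and has disappeared by a few thousand nodes.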

Professor Jarvis concludes:

"Given the crossroads at which supercomputing stands, and the national pride at stake in achieving Exascale, this design battle will continue to be hotly contested. It will also need the best modeling techniques that the community can provide to discern good design from bad."

More information: A PDF of the paper can be found at: www2.warwick.ac.uk/fac/sci/dcs… tions/pubs/sc-lu.pdf

User comments: 3

Husky, Nov 11, 2010
So, it's really up to Nvidia to improve the CUDA software tools for better cluster utilisation.
El_Nose, Nov 11, 2010
It's not JUST CUDA... and CUDA does have a few key items that could be tweaked, namely a way to identify the number of processors/cores/threads that each card can handle, and their functionality, so that algorithmic improvements are possible without hard-coding that information (a minimal sketch of such a query appears after this comment).

But really, like the article said, it's the paradigm that's broken. Until you teach esoteric languages like Lisp to everyone, or parallel programming becomes an undergrad course, you need a shift at the educational level funded by the commercial level.

It's like programming based on creationism when you believe in Darwin. You have the creationist tools like threads, but you need Darwinian tools like MPI or OpenCL, and languages like UPC. CUDA is nice because if you know C you can use it... but if it were retooled as its own language, built from the ground up for parallel computing and mandating a standard from the devices it interfaces with, it would be even more revolutionary!!!

CUDA is an awesome start though.
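For what it's worth, part of the first request is already covered by the CUDA runtime: cudaGetDeviceProperties reports per-device limits that code can read at run time instead of hard-coding them. A minimal query in C, assuming a CUDA toolkit and at least one CUDA-capable device, might look like this:

/* Minimal CUDA device query: prints, for each GPU, the limits a program
 * could use to size its work instead of hard-coding them. Build with nvcc. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "no CUDA devices found\n");
        return 1;
    }
    for (int d = 0; d < count; d++) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("device %d: %s\n", d, prop.name);
        printf("  multiprocessors       : %d\n", prop.multiProcessorCount);
        printf("  warp size             : %d\n", prop.warpSize);
        printf("  max threads per block : %d\n", prop.maxThreadsPerBlock);
        printf("  global memory         : %.1f GiB\n",
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
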
El_Nose, Nov 11, 2010
Right now it's like all languages believe in a single-core CPU; you can tweak them to pretend there is a second core there, but the language itself doesn't believe it, it just accepts the data coming back from the second source.

I love lampi but it can be restrictive, and I wonder how it could be used on 2+ systems along with CUDA, maybe using a .dll?

At any rate it's not just money but research... and you can't do that if the first time you hear of bitonic search is in grad school. Quad-core processors are common and six-core parts are dropping in price. Intel will have an 8+ core processor out in two years... the time for these tools to be common knowledge was 4 years ago... we are behind the curve so very badly it's embarrassing.