HP Labs award will lay groundwork for next generation computers

Sep 17, 2008
The power of exascale computing will enable weather simulations to operate at finer resolution. Georgia Tech researchers recently received an award to help solve some of the key problems in developing exascale machines. Image courtesy of NOAA

While most personal computers today can process billions of calculations per second, computer scientists are laying the groundwork for exascale machines that will process more than a million trillion – or 10^18 – calculations per second. Just a few months ago, scientists reached the long-sought high-performance computing milestone of one petaflop by processing more than a thousand trillion – or 10^15 – calculations per second.
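
To make the jump in scale concrete, the back-of-the-envelope sketch below compares how long a fixed amount of work – everything an exascale machine would finish in one second – would take at petaflop and desktop speeds. The desktop figure of roughly 100 gigaflops is an assumption for illustration, not a number from the researchers.

```cpp
#include <cstdio>

int main() {
    // Assumed sustained rates, in floating-point operations per second.
    const double desktop_rate  = 1.0e11;  // ~100 gigaflops: an assumed 2008-era PC
    const double petaflop_rate = 1.0e15;  // the milestone recently reached
    const double exaflop_rate  = 1.0e18;  // the exascale target

    // Work an exascale machine would finish in a single second.
    const double work = exaflop_rate * 1.0;

    std::printf("Petaflop machine: %.0f seconds (about %.0f minutes)\n",
                work / petaflop_rate, work / petaflop_rate / 60.0);
    std::printf("Desktop PC:       about %.0f days\n",
                work / desktop_rate / 86400.0);
    return 0;
}
```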

"The need for exascale-sized machines is well-established," said Karsten Schwan, a professor in the School of Computer Science in the College of Computing at the Georgia Institute of Technology. "With exascale machines, weather simulations will be able to operate at finer resolution, biologists will be able to model more complex systems, and businesses will be able to run and manage many applications at the same time on a single large machine."

Schwan recently received a 2008 HP Labs Innovation Research Award to work with HP Labs, HP's central research arm, to help solve some of the key problems in developing exascale machines. The high-impact research award, one of only two granted for exascale research and 41 granted overall to professors around the world, encourages open collaboration with HP Labs. The award amount is renewable for a total of three years based on research progress and HP business requirements.

With the petaflop barrier broken, researchers like Schwan are focusing on the next goal – improving that processing power a thousandfold to reach the exascale. Schwan's expertise in high-performance and enterprise computing will help him solve some of the challenges surrounding exascale systems.

"We believe that machines will reach exascale size only by combining common chips – such as quad core processors – with special purpose chips – such as graphics accelerators," said Schwan, who is also director of the Georgia Tech Center for Experimental Research in Computer Systems (CERCS).

A challenge that arises from this scenario is how to efficiently run programs on these heterogeneous many-core chips. To investigate possible methods for doing this, Schwan will team with Georgia Tech School of Electrical and Computer Engineering professor Sudhakar Yalamanchili, an expert in heterogeneous many-core platforms.
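
As a rough illustration of that scheduling question – a sketch invented for this article, not the researchers' actual method, with hypothetical task names and cost estimates – a dispatcher might estimate how long each piece of work would take on a general-purpose core versus an accelerator and send it wherever it is expected to finish first.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical task description: estimated runtime on each kind of chip.
struct Task {
    std::string name;
    double cpu_seconds;          // estimated time on a general-purpose core
    double accelerator_seconds;  // estimated time on a graphics accelerator
};

int main() {
    std::vector<Task> tasks = {
        {"fft",        4.0, 0.5},   // data-parallel: accelerator-friendly
        {"io_shuffle", 1.0, 3.0},   // branchy, latency-bound: CPU-friendly
        {"stencil",    6.0, 0.8},
    };

    // Track when each pool next becomes free (a greedy, earliest-finish heuristic).
    double cpu_free = 0.0, accel_free = 0.0;

    for (const Task& t : tasks) {
        double cpu_finish   = cpu_free   + t.cpu_seconds;
        double accel_finish = accel_free + t.accelerator_seconds;
        if (accel_finish < cpu_finish) {
            accel_free = accel_finish;
            std::printf("%-10s -> accelerator (finishes at %.1fs)\n",
                        t.name.c_str(), accel_finish);
        } else {
            cpu_free = cpu_finish;
            std::printf("%-10s -> CPU core    (finishes at %.1fs)\n",
                        t.name.c_str(), cpu_finish);
        }
    }
    return 0;
}
```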

Exascale machines must also be able to run multiple systems and applications on a single platform at the same time, while guaranteeing that they will not interfere with each other. An approach called virtualization may help solve this challenge by hiding some of the underlying computer architecture issues from applications.

"With virtualization, decisions have to be made about where, when and for how long certain programs should run, but there are many ways of determining what might be appropriate because there might be multiple goals," explained Schwan. "For instance, one might want to minimize the exascale machine's power consumption while at the same time meet some performance goal for the application. In other words, virtualized systems must be actively 'managed' to attain end user, institutional or corporate goals."

Ada Gavrilovska, a specialist in virtualization and multi-core operation and research scientist in the School of Computer Science in the College of Computing, will collaborate with Schwan to determine how to manage multiple programs on exascale machines that consist of hundreds of thousands of processors.

Though exascale machines are high-performance computing systems, the vision for them goes beyond the picture typically painted for high-performance computing. Rather than only scaling a single program to run on hundreds of thousands of cores, exascale systems will also be used to run multiple programs at once on a single large machine.

"This future virtualized and managed exascale system will guarantee some level of service even when parts of the machine get too loaded or too hot or fail, since applications can be moved while they are running," said Schwan.

Though it will be several years before exascale systems are developed, scientists at Georgia Tech will use the HP Labs Innovation Research Award to lay the foundation for solving emerging science and engineering challenges in national defense, energy assurance, advanced materials and climate.

"Around the world, HP partners with the best and the brightest in industry and academia to drive open innovation and set the agenda for breakthrough technologies that are designed to change the world," said Prith Banerjee, senior vice president of research at HP and director of HP Labs. "HP Labs' selection of Karsten Schwan for a 2008 Innovation Award demonstrates outstanding achievement and will help accelerate HP Labs' global research agenda in pursuit of scientific breakthroughs."

Source: Georgia Institute of Technology
