Supercomputing on the XPRESS track: Sandia aims to create exascale computing operating system

December 20, 2012 by Neal Singer

In the stratosphere of high-performance supercomputing, a team led by Sandia National Laboratories is designing an operating system that can handle the million trillion mathematical operations per second of future exascale computers, and will then create prototypes of several programming components.

Called the XPRESS (eXascale Programming Environment and System Software) project, the effort to achieve a major milestone in million-trillion-operations-per-second supercomputing is funded at $2.3 million a year for three years by DOE's Office of Science. The team includes Indiana University and Louisiana State University; the universities of North Carolina, Oregon and Houston; and Oak Ridge and Lawrence Berkeley national laboratories. Work began Sept. 1.

"The project's goal is to devise an innovative operating system and associated components that will enable exascale computing by 2020, making contributions along the way to improve current petaflop (a million billion operations a second) systems," said Sandia program lead Ron Brightwell.

Scientists in industry and academia believe that exascale computing speeds will more accurately simulate the most complex reactions in fields such as chemistry and biology, but enormous preparation is necessary before the next generation of supercomputers can achieve such speeds.

"System software on today's computers is largely based on ideas and technologies developed more than twenty years ago, before processors with hundreds of computing cores were even imagined," said Brightwell. "The XPRESS project aims to provide a system software foundation designed to maximize the performance and scalability of future large-scale systems, as well as enable a new approach to the science and engineering applications that run on them."

Current supercomputers operate through a method called parallel processing, in which individual chips work out parts of a problem and contribute results in an order controlled by a master program, much like the output of instruments in an orchestra is controlled by a conductor. Chip speed itself thus plays a less important role than the ability to synchronize individual results, since the method relies on the addition of chips for greater traction in solving harder problems in a reasonable amount of time.  

But merely adding more chips to a supercomputer "orchestra" can make the orchestra unwieldy, the conductor's job more difficult and, in the end, impossible.
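The conductor-and-orchestra pattern described above can be seen in miniature with ordinary parallel code. The sketch below is purely illustrative (it uses Python's standard `multiprocessing` module, not any XPRESS software): a main process splits a problem into parts, worker processes solve each part, and the main process merges the partial results.

```python
# Illustrative sketch only: a "conductor" (the main process) divides a
# problem among "instrument" workers and combines their partial results.
from multiprocessing import Pool

def partial_sum(bounds):
    """One worker's share of the problem: a sum of squares over a range."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def conducted_sum(n, workers=4):
    """Split [0, n) into chunks, farm them out, and merge the results."""
    step = n // workers
    # The last worker absorbs any remainder so every index is covered.
    chunks = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # The parallel answer matches a plain serial loop.
    assert conducted_sum(1000) == sum(i * i for i in range(1000))
```

Note that the coordination itself (splitting, dispatching, merging) is work the conductor must do; as the article explains next, that coordination is exactly what becomes unwieldy as the number of workers grows.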

In addition to such programming difficulties, massive arrays of processors generate excess heat that wastes energy and increases the chance that some processors will fail. Designing convenient locations to store data so it's immediately available to processors is another problem.

The conundrum is, in short, that an exascale computer using current technologies could have the unwanted complexity of a Rube Goldberg contraption that uses the energy of a small city and demands round-the-clock upkeep.

To reduce these problems and start researchers on the road to solutions, the multi-institution XPRESS effort will address specific factors known to degrade fast performance. These include "starvation," an insufficiency of concurrent partial problem-solving at particular processing locations, which hinders both efficiency and scalability. Information delays, known as latency effects, need to be reduced through a combination of better locality management, reduction of superfluous messaging and the hiding of delays behind useful computation. Overhead, the work spent managing parallelism rather than doing the computation itself, limits the granularity of parallelism that can be effectively exploited, which reduces scalability. Waiting, because the same memory is needed by several processors, also causes slowdowns.
