Improved ranking test for supercomputers to be released by Sandia

Nov 18, 2013

Sandia National Laboratories researcher Mike Heroux has helped craft a new benchmark that more accurately measures the power of modern supercomputers for scientific and engineering applications. Heroux collaborated with the creator of the widely used LINPACK benchmark, Jack Dongarra, and his colleagues at the University of Tennessee and Oak Ridge National Laboratory.

The LINPACK TOP500 test, devised by Dongarra's team, has for decades certified which new machines rank among the 500 fastest in the world.

The new benchmark—a relatively small program called the High Performance Conjugate Gradient (HPCG) benchmark—is undergoing field tests on several National Nuclear Security Administration (NNSA) supercomputers. It will be formally released this month in Denver at SC13, an annual supercomputing conference, at the TOP500 Birds-of-a-Feather meeting, from 7:30-9:30 p.m. Tuesday at the Convention Center's Mile-High Ballroom.

"Supercomputer designers have known for quite a few years that LINPACK was not a good performance proxy for many complex modern applications," said Heroux.

NNSA's Office of Advanced Simulation and Computing (ASC) is funding the HPCG work because it wants a more meaningful measure that reflects how increasingly complex scientific and engineering codes would perform on upcoming supercomputing architectures, said Heroux.

LINPACK's influential semi-annual TOP500 listing of the 500 fastest machines has been the global benchmark for two decades, initially because even nonexperts considered it a simple and accurate metric.

"The TOP500 was and continues to be the best advertising supercomputing gets," Heroux said. "Twice a year, when the new rankings come out, we get articles in media around the world. My 6-year-old can appreciate what it means."

In the early years of supercomputing, applications and problems were simpler and better matched the algorithms and data structures used in the LINPACK benchmark. Since then, applications and problems have become much more complex, demanding a broader collection of capabilities from computer systems. Thus, the gap between LINPACK performance and performance in real applications has grown dramatically in recent years.

"LINPACK specifications are like telling race car designers to build the fastest car for a completely flat, open terrain," Heroux said. "In that setting, the car has to satisfy only a single design goal. It does not need brakes, a steering wheel or other control features, making it impractical for real driving situations. It still gets there when needed, but is it the best for today?"

Additionally, computer designers have built systems with many arithmetic units but very weak data networks and primitive execution models.

"The extra arithmetic units are useless," Heroux said, "because modern applications cannot use them without better access to data and more flexible execution models."

He developed the new benchmark by starting with a teaching code he wrote to instruct students and junior staff members on developing parallel applications. This code later became the first "miniapp" in Mantevo, a project that recently won a 2013 R&D 100 Award.

The technical challenge of HPCG is to develop a very small program that captures as much of the essential performance of a large application as possible without making it too complicated. "We created a program with only 4,000 lines that behaves a lot like a real code of 1 million lines but is much simpler," Heroux said. "If we run HPCG on a simulator or new system and modify the code or computer design so that the code runs faster, we can make the same changes to make the real code run faster. The beauty of the approach is that it really works."

HPCG generates a large collection of algebraic equations that must be satisfied simultaneously. The conjugate gradient algorithm used in HPCG to solve these equations is an iterative method, which homes in on the solution through repeated refinements. It's the simplest practical method of its kind, so it is both a real algorithm that people need and not too complicated to implement.
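The iteration described above can be sketched in a few lines. The toy solver below is not HPCG itself—it uses small dense NumPy arrays rather than HPCG's large sparse problem—but it shows the same conjugate gradient refinement loop: start from a guess, measure the residual error, and step along successive search directions until the equations are satisfied.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A by iteration."""
    x = np.zeros_like(b)               # initial guess
    r = b - A @ x                      # residual: how far off the guess is
    p = r.copy()                       # first search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # optimal step length along p
        x = x + alpha * p              # refine the solution
        r = r - alpha * Ap             # update the residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # converged: residual small enough
            break
        p = r + (rs_new / rs_old) * p  # next (conjugate) search direction
        rs_old = rs_new
    return x

# A small symmetric positive-definite system to demonstrate convergence.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

Each pass costs one matrix-vector product, which is why the method's performance depends so heavily on data access rather than raw arithmetic.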

It also uses data structures that more closely match real applications. LINPACK's data structures are no longer used for large problems in real applications because they require storing many zero values, which worked when application problems and computer memory sizes were much smaller. Today's problems are so large that data structures must pay attention to what is zero and what is not zero, which HPCG does.

For example, in a simulation of a car's structural integrity, the terms represent how points on the frame of a vehicle directly interact with each other. Most points have no direct interaction with each other, so most terms are zero. LINPACK data structures would require storing all terms, while HPCG only stores the nonzero terms.
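The storage difference described above can be made concrete. The sketch below (an illustration, not HPCG's actual internal layout) contrasts a dense matrix, which stores every term including the zeros, with a compressed sparse row (CSR) layout, one common scheme that keeps only the nonzero values plus their positions:

```python
# A small matrix in which most terms are zero, as in the car-frame
# example: most points have no direct interaction with each other.
dense = [
    [4.0, 0.0, 0.0, 1.0],
    [0.0, 3.0, 0.0, 0.0],
    [0.0, 0.0, 5.0, 2.0],
    [1.0, 0.0, 2.0, 6.0],
]

# Build a CSR-style representation: values holds only nonzero terms,
# col_index records each term's column, and row_ptr marks where each
# row's entries begin in the values array.
values, col_index, row_ptr = [], [], [0]
for row in dense:
    for j, v in enumerate(row):
        if v != 0.0:                  # skip zero terms entirely
            values.append(v)
            col_index.append(j)
    row_ptr.append(len(values))

dense_count = len(dense) * len(dense[0])   # 16 stored terms
sparse_count = len(values)                 # 8 stored terms
```

On real problems the matrix might have millions of rows with only a handful of nonzeros each, so the savings—and the irregular memory access pattern that comes with them—dominate the computation.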

"By providing a new benchmark, we hope system designers will build hardware and software that will run faster for our very large and complicated problems," Heroux said.

The HPCG code tests science and engineering problems involving complex equations, and is not related to another Sandia-led benchmark code known as Graph 500, which assesses and ranks the capabilities of supercomputers involved in so-called "big data" problems that search for relationships through graphs.
