New simulation speed record on Sequoia supercomputer

May 01, 2013
Lawrence Livermore scientists, from left, David Jefferson and Peter Barnes. Photo by Laura Schulz and Meg Epperly/LLNL

(Phys.org) — Computer scientists at Lawrence Livermore National Laboratory (LLNL) and Rensselaer Polytechnic Institute have set a high performance computing speed record that opens the way to the scientific exploration of complex planetary-scale systems.

In a paper to be published in May, the joint team will announce a record-breaking speed of 504 billion events per second on LLNL's Sequoia Blue Gene/Q supercomputer, dwarfing the previous record of 12.2 billion events per second, set in 2009.

Constructed by IBM, the 120-rack Sequoia supercomputer has a peak performance of 25 petaflops and is the second fastest supercomputer in the world, with a total speed and capacity equivalent to about one million desktop PCs. A petaflop is a quadrillion floating point operations per second.
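As a rough sanity check of the "one million desktop PCs" comparison, the short Python sketch below divides the quoted peak rate by an assumed per-desktop rate of about 25 gigaflops; that per-desktop figure is an illustrative assumption, not a number from the article.

```python
# Rough sanity check of the "one million desktop PCs" comparison.
# The per-desktop figure is an assumption for illustration only.
sequoia_peak_flops = 25e15   # 25 petaflops, as quoted in the article
desktop_flops = 25e9         # assumed ~25 gigaflops for a typical desktop PC
print(sequoia_peak_flops / desktop_flops)   # 1000000.0 -> about one million desktops
```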

In addition to breaking the record for computing speed, the research team set a record for the most highly parallel "discrete event simulation," with 7.86 million simultaneous tasks using 1.97 million cores. Discrete event simulations are used to model irregular systems with behavior that cannot be described by equations, such as communication networks, traffic flows, economic and ecological models, military combat scenarios, and many other complex systems.
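To make the term concrete, here is a minimal, self-contained Python sketch of a discrete event simulator: timestamped events sit in a priority queue and are processed in time order, and each handler may schedule new future events. This illustrates only the general technique; it is not ROSS and does not reflect the team's code.

```python
import heapq

# Minimal discrete event simulation sketch (illustrative only, not ROSS):
# events are (timestamp, name, handler) tuples kept in a priority queue
# and processed in timestamp order, not in fixed time steps.

def run(events, end_time):
    """Process timestamped events until the queue empties or end_time passes."""
    heapq.heapify(events)
    processed = 0
    while events:
        timestamp, name, handler = heapq.heappop(events)
        if timestamp > end_time:
            break
        # A handler may schedule new future events, modelling e.g. a packet
        # arrival that triggers a later departure.
        for new_event in handler(timestamp):
            heapq.heappush(events, new_event)
        processed += 1
    return processed

# Toy model: each "arrival" at time t schedules a "departure" two time units later.
def arrival(t):
    return [(t + 2.0, "departure", lambda _t: [])]

initial = [(0.0, "arrival", arrival), (1.0, "arrival", arrival)]
print(run(initial, end_time=10.0))  # prints 4: two arrivals, two departures
```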

Prior to the record-setting experiment, a preliminary scaling study was conducted at Rensselaer's supercomputing center, the Computational Center for Nanotechnology Innovation (CCNI). The researchers tuned parameters on the CCNI's two-rack Blue Gene/Q system and optimized the experiment to scale up and run on the 120-rack Sequoia system.

Authors of the study are Peter Barnes, Jr. and David Jefferson of LLNL, and CCNI Director and computer science professor Chris Carothers and graduate student Justin LaPre of Rensselaer.

The records were set using the ROSS (Rensselaer's Optimistic Simulation System) simulation package, developed by Carothers and his students, together with the Time Warp synchronization algorithm originally developed by Jefferson.
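Time Warp is an "optimistic" synchronization scheme: each logical process executes events speculatively and, if a message with an earlier timestamp (a "straggler") later arrives, it rolls back to a saved state and recovers. The Python sketch below illustrates only that rollback idea under simplifying assumptions; the real algorithm, and ROSS's implementation of it, also involves anti-messages, global virtual time, fossil collection, and re-execution of rolled-back events, all of which are omitted here.

```python
# A highly simplified sketch of the rollback idea behind Time Warp
# (illustrative only; anti-messages, global virtual time, fossil collection
# and re-execution of rolled-back events are omitted).

class LogicalProcess:
    def __init__(self):
        self.state = 0                 # toy state: a running event count
        self.local_time = 0.0          # local virtual time (LVT)
        self.processed = []            # (timestamp, saved_state) checkpoints

    def handle(self, timestamp):
        """Optimistically process an event, saving state for possible rollback."""
        if timestamp < self.local_time:
            self.rollback(timestamp)   # straggler: undo work past its timestamp
        self.processed.append((timestamp, self.state))
        self.state += 1
        self.local_time = timestamp

    def rollback(self, timestamp):
        """Restore the last state saved before the straggler's timestamp."""
        while self.processed and self.processed[-1][0] >= timestamp:
            ts, saved = self.processed.pop()
            self.state = saved         # undo the effect of the event at ts
        self.local_time = self.processed[-1][0] if self.processed else 0.0

lp = LogicalProcess()
for t in (1.0, 2.0, 4.0):
    lp.handle(t)        # processed optimistically, in arrival order
lp.handle(3.0)          # straggler: forces a rollback of the event at t=4.0
print(lp.state, lp.local_time)   # 3 3.0 -> three events' effects survive, LVT is 3.0
```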

"The significance of this demonstration is that direct simulation of 'planetary scale' models is now, in principle at least, within reach," said Jefferson.

"Planetary scale" in the context of the joint team's work means simulations large enough to represent all 7 billion people in the world or the entire Internet's few billion hosts.

"This is an exciting time to be working in high-performance computing, as we explore the petascale and move aggressively toward exascale computing," Carothers said. "We are reaching an interesting transition point where our simulation capability is limited more by our ability to develop, maintain and validate models of complex systems than by our ability to execute them in a timely manner."

The calculations were completed while Sequoia was in unclassified 'early science' service as part of the machine's integration period. The system is now in classified service.
