New simulation speed record on Sequoia supercomputer

May 01, 2013
Lawrence Livermore scientists, from left, David Jefferson and Peter Barnes. Photo by Laura Schulz and Meg Epperly/LLNL

(Phys.org) — Computer scientists at Lawrence Livermore National Laboratory (LLNL) and Rensselaer Polytechnic Institute have set a high-performance computing speed record that opens the way to the scientific exploration of complex planetary-scale systems.

In a paper to be published in May, the joint team will announce a record-breaking speed of 504 billion events per second on LLNL's Sequoia Blue Gene/Q supercomputer, dwarfing the previous record of 12.2 billion events per second, set in 2009.

Constructed by IBM, the 120-rack Sequoia supercomputer has a peak performance of 25 petaflops and is the second fastest in the world, with a total speed and capacity equivalent to about one million desktop PCs. A petaflop is a quadrillion floating-point operations per second.

In addition to breaking the record for computing speed, the research team set a record for the most highly parallel "discrete event simulation," with 7.86 million simultaneous tasks using 1.97 million cores. Discrete event simulations are used to model irregular systems with behavior that cannot be described by equations, such as communication networks, traffic flows, economic and ecological models, military combat scenarios, and many other complex systems.
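At its core, a discrete event simulator keeps a priority queue of timestamped events and repeatedly processes the earliest one, jumping the simulated clock directly from event to event instead of ticking through fixed time steps. The Python sketch below illustrates that basic sequential pattern; it is a minimal illustration of the general technique, not ROSS itself, and the packet-hop workload in it is invented for the example.

```python
import heapq

# Minimal sequential discrete event simulator (illustrative sketch, not ROSS).
# Events live in a min-heap keyed by timestamp; the main loop pops the
# earliest event, advances the simulated clock straight to its time, and
# runs its handler, which may schedule further events.

class Simulator:
    def __init__(self):
        self.clock = 0.0
        self.queue = []        # min-heap of (time, seq, handler, args)
        self.seq = 0           # tie-breaker so equal-time events pop FIFO
        self.events_processed = 0

    def schedule(self, delay, handler, *args):
        """Schedule handler(*args) to fire `delay` time units from now."""
        heapq.heappush(self.queue, (self.clock + delay, self.seq, handler, args))
        self.seq += 1

    def run(self, until):
        while self.queue and self.queue[0][0] <= until:
            time, _, handler, args = heapq.heappop(self.queue)
            self.clock = time  # jump directly to the next event's time
            handler(*args)
            self.events_processed += 1

# Invented workload: a packet that makes three hops, each arrival
# scheduling the next one after a made-up 1.5-time-unit link delay.
def packet_arrival(sim, hop):
    if hop < 3:
        sim.schedule(1.5, packet_arrival, sim, hop + 1)

sim = Simulator()
sim.schedule(0.0, packet_arrival, sim, 0)
sim.run(until=10.0)
print(f"processed {sim.events_processed} events, clock = {sim.clock}")
```

The hard part of the record is parallelizing this loop: with events spread across millions of cores, causally related events must still appear to execute in timestamp order across machines, which is the problem the Time Warp algorithm mentioned below addresses.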

Prior to the record-setting experiment, a preliminary scaling study was conducted at Rensselaer's Computational Center for Nanotechnology Innovation (CCNI). The researchers tuned parameters on the CCNI's two-rack Blue Gene/Q system and optimized the experiment to scale up and run on the 120-rack Sequoia system.

Authors of the study are Peter Barnes, Jr. and David Jefferson of LLNL, and CCNI Director and computer science professor Chris Carothers and graduate student Justin LaPre of Rensselaer.

The records were set using the ROSS (Rensselaer's Optimistic Simulation System) simulation package developed by Carothers and his students, together with the Time Warp synchronization algorithm originally developed by Jefferson.
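Time Warp takes an optimistic approach to that ordering problem: each logical process executes incoming events immediately, saving enough state to undo its work if a "straggler" event later arrives with an earlier timestamp. The toy Python sketch below shows only the rollback-and-replay core; the full algorithm also uses anti-messages to cancel speculatively sent events, plus global virtual time computation and fossil collection, all omitted here, and none of this code is taken from ROSS.

```python
import copy

# Toy sketch of Time Warp's optimistic rollback (illustrative only).
# A logical process runs events as they arrive and logs a snapshot of its
# state before each one; a straggler with an earlier timestamp triggers a
# rollback to the snapshot, then a replay of the undone events in order.

class LogicalProcess:
    def __init__(self):
        self.state = {"count": 0}   # invented state for the example
        self.lvt = 0.0              # local virtual time
        self.log = []               # (timestamp, event, state_before)

    def receive(self, timestamp, event):
        if timestamp < self.lvt:
            # Straggler: undo every event with a later timestamp...
            undone = []
            while self.log and self.log[-1][0] > timestamp:
                ts, ev, saved = self.log.pop()
                self.state = saved
                undone.append((ts, ev))
            self.lvt = self.log[-1][0] if self.log else 0.0
            # ...then run the straggler and replay in ascending time order.
            self.execute(timestamp, event)
            for ts, ev in reversed(undone):
                self.execute(ts, ev)
        else:
            self.execute(timestamp, event)

    def execute(self, timestamp, event):
        self.log.append((timestamp, event, copy.deepcopy(self.state)))
        self.state["count"] += event    # the "event processing" itself
        self.lvt = timestamp

lp = LogicalProcess()
lp.receive(1.0, 10)
lp.receive(3.0, 10)
lp.receive(2.0, 5)   # straggler: rolls back and replays the t=3.0 event
print(lp.state["count"], lp.lvt)   # -> 25 3.0
```

The payoff of this optimism is that processes almost never wait on one another, which is what lets a run keep nearly two million cores busy; rollbacks only cost extra work when speculation actually goes wrong.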

"The significance of this demonstration is that direct simulation of 'planetary scale' models is now, in principle at least, within reach," said Jefferson.

"Planetary scale" in the context of the joint team's work means simulations large enough to represent all 7 billion people in the world or the entire Internet's few billion hosts.

"This is an exciting time to be working in high-performance computing, as we explore the petascale and move aggressively toward exascale computing," Carothers said. "We are reaching an interesting transition point where our simulation capability is limited more by our ability to develop, maintain and validate models of complex systems than by our ability to execute them in a timely manner."

The calculations were completed while Sequoia was in unclassified 'early science' service as part of the machine's integration period. The system is now in classified service.
