Computational Science Programming Model Crosses the Petaflop Barrier

Feb 12, 2010
Global Arrays are distributed dense arrays that can be accessed as if they were in shared memory.

(PhysOrg.com) -- Researchers at Pacific Northwest National Laboratory and Oak Ridge National Laboratory have demonstrated that the PNNL-developed Global Arrays computational programming model can perform at the petascale level. The demonstration ran at 1.3 petaflops, or 1.3 quadrillion numerical operations per second, using more than 200,000 processors. This represents about 50% of the processors' peak theoretical capacity. Global Arrays is one of only two parallel programming models that have achieved this level of performance.

The Global Arrays technology was used in a computational chemistry simulation that was presented during the annual International Conference on High-Performance Computing, Networking, Storage and Analysis in Portland, Oregon in November. The conference is sponsored by the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers. The paper describing the simulation was a finalist for the Gordon Bell Prize, which recognizes outstanding achievement in high-performance computing applications.

Why it matters: Global Arrays enables researchers to more efficiently access global data, run bigger models, and simulate larger systems, resulting in a better understanding of the data and processes being evaluated.

For example, this demonstration focused on modeling water. Water is essential in numerous key chemical and biological processes, and accurate models are critical to understanding, controlling, and predicting the physical and chemical properties of complex aqueous systems.

The computational chemistry simulation performed with Global Arrays gave researchers more accurate data on the properties of water at the molecular level, its interactions with other molecules, and its behavior at interfaces.

Methods: Scientific data is stored in the memory of compute nodes. Each processor can directly access only the data in its own node's memory, yet most analyses depend on data spread across many nodes. Standard programming models require coordination between processes to send and receive that data: the process that owns the data must take part in every transfer.
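
To make that contrast concrete, here is a minimal sketch of the two-sided pattern using MPI. It is illustrative only, not code from the demonstration; the ranks, buffer size, and message tag are arbitrary choices.

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int me;
    MPI_Comm_rank(MPI_COMM_WORLD, &me);

    double buf[10];
    if (me == 0) {
        for (int i = 0; i < 10; i++) buf[i] = (double)i;
        /* Rank 0 cannot deliver its data unless rank 1 cooperates... */
        MPI_Send(buf, 10, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (me == 1) {
        /* ...by posting a matching receive for the same message. */
        MPI_Recv(buf, 10, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}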

Global Arrays lets a process read or write data directly in the memory of another node without any involvement from the remote processor: it can put data into or get data from another process's portion of the array with no advance coordination.
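
The sketch below shows that one-sided style using the Global Arrays C interface (GA_Initialize, NGA_Create, NGA_Put, NGA_Get, GA_Sync). It is illustrative only; the array shape, chunking, and memory-allocator sizes are assumptions for the example, not details of the petascale run.

#include <mpi.h>
#include "ga.h"
#include "macdecls.h"

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    GA_Initialize();                      /* start the Global Arrays runtime */
    MA_init(C_DBL, 1000000, 1000000);     /* memory allocator used internally by GA */

    int nprocs = GA_Nnodes();
    int me     = GA_Nodeid();

    /* One global 2-D array of doubles, one 10-element row per process;
       chunk = {-1, -1} lets GA choose how to distribute it across nodes. */
    int dims[2]  = {nprocs, 10};
    int chunk[2] = {-1, -1};
    int g_a = NGA_Create(C_DBL, 2, dims, "A", chunk);

    /* Each process writes its own row with a one-sided put; the process
       that physically owns that patch of the array does nothing special. */
    double buf[10];
    for (int i = 0; i < 10; i++) buf[i] = (double)me;
    int lo[2] = {me, 0}, hi[2] = {me, 9}, ld[1] = {10};
    NGA_Put(g_a, lo, hi, buf, ld);
    GA_Sync();                            /* make all puts globally visible */

    /* Read a neighbour's row, again with no action by the remote process. */
    int other = (me + 1) % nprocs;
    int rlo[2] = {other, 0}, rhi[2] = {other, 9};
    NGA_Get(g_a, rlo, rhi, buf, ld);

    GA_Destroy(g_a);
    GA_Terminate();
    MPI_Finalize();
    return 0;
}

The point to notice is that NGA_Put and NGA_Get complete without the process owning the target patch executing any matching call; the Global Arrays runtime moves the data itself.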

More information: See the Global Arrays Toolkit website.
