Billion inserts-per-second data milestone reached for supercomputing tool

Jun 02, 2014

(Phys.org) — At Los Alamos, a supercomputing epicenter where "big data set" really means something, a data middleware project has achieved a milestone in specialized information organization and storage. The Multi-dimensional Hashed Indexed Middleware (MDHIM) project at Los Alamos National Laboratory recently achieved 1,782,105,749 key/value inserts per second into a globally ordered key space on the Laboratory's Moonlight supercomputer.

"In the current highly parallel computing world, the need for scalability has forced the world away from fully transactional databases and back to the loosened semantics of key value stores," says Gary Grider, High Performance Computing division leader at Los Alamos.

Computer simulations across the field are scaling to higher parallel-processor counts, simulating finer physical scales or more complex physical interactions. As they do so, the simulations produce ever-larger data sets that must be analyzed to yield the insights scientists need.

"This milestone was achieved by a combination of good software design and refined algorithms. Our code is available on Github and we encourage others to build upon it," said Hugh Greenberg, project leader and lead developer of the MDHIM project.

Traditionally, much data analysis has been visual; data are turned into images or movies. Statistical analysis generally occurs over the entire data set. But more detailed analysis of entire data sets is becoming untenable because of the resources required to move, search, and analyze all the data at once. The ability to identify, retrieve, and analyze smaller subsets of data within the multidimensional whole would make detailed analysis much more practical. To achieve this, it becomes essential to find strategies for managing the multiple dimensions of simulation data.

The MDHIM project aims to create a middle-ground framework between fully relational databases and distributed but completely local constructs like map/reduce. MDHIM lets applications take advantage of the mechanisms of a parallel key-value store, storing data in global multi-dimensional order and sub-setting massive data sets along multiple dimensions, as well as the functions of a distributed hash table with simple but massively parallel lookups.
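As a rough illustration of that ordered key space (MDHIM itself is a C library layered over MPI and existing key-value stores; the Python classes and splitter values below are invented for this sketch and are not its API), range-based placement of keys across servers might look like this:

import bisect

class RangeServer:
    """Holds one contiguous slice of the global key space, kept sorted locally."""
    def __init__(self):
        self.keys, self.values = [], []

    def insert(self, key, value):
        i = bisect.bisect_left(self.keys, key)
        self.keys.insert(i, key)
        self.values.insert(i, value)


class OrderedKVStore:
    """Routes each key to the server that owns its range (placement logic is hypothetical)."""
    def __init__(self, splitters):
        # splitters are the range boundaries between servers, e.g. [1000, 2000, 3000]
        self.splitters = splitters
        self.servers = [RangeServer() for _ in range(len(splitters) + 1)]

    def insert(self, key, value):
        # Range-based placement (rather than hashing) preserves global key order:
        # concatenating the servers' local stores yields one globally sorted sequence.
        owner = bisect.bisect_right(self.splitters, key)
        self.servers[owner].insert(key, value)


store = OrderedKVStore(splitters=[1000, 2000, 3000])
for timestep in (42, 1500, 2718, 3141):
    store.insert(timestep, {"timestep": timestep})

Because placement follows key ranges rather than a hash, a query over a contiguous slice of the key space needs to touch only the servers whose ranges intersect it.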

Records are sorted globally in as many ways as an application chooses. Through the MDHIM library, applications can implement anything from shared-nothing, map/reduce-style functionality to deeply indexed data with rich information about the statistical distributions within all keys. This allows global identification and retrieval of relevant data subsets for further analysis.
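A minimal sketch of the multiple-orderings idea, assuming hypothetical record fields such as timestep and temperature (these names and the in-memory SortedIndex class are illustrative only, not part of MDHIM): keeping more than one sorted index over the same records lets a range query along any indexed dimension return just the matching subset.

import bisect

class SortedIndex:
    """One global ordering over the records; an application can keep several."""
    def __init__(self):
        self.keys, self.records = [], []

    def insert(self, key, record):
        i = bisect.bisect_left(self.keys, key)
        self.keys.insert(i, key)
        self.records.insert(i, record)

    def range_query(self, lo, hi):
        # Only records whose key falls in [lo, hi] are touched and returned.
        i = bisect.bisect_left(self.keys, lo)
        j = bisect.bisect_right(self.keys, hi)
        return self.records[i:j]


by_timestep, by_temperature = SortedIndex(), SortedIndex()
for rec in ({"timestep": 10, "temperature": 450.0},
            {"timestep": 20, "temperature": 300.0},
            {"timestep": 30, "temperature": 520.0}):
    by_timestep.insert(rec["timestep"], rec)
    by_temperature.insert(rec["temperature"], rec)

# Subset selection along one dimension; the rest of the data is never scanned.
hot_regions = by_temperature.range_query(400.0, 600.0)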

MDHIM is designed to represent petabytes of data with mega- to gigabytes of representation data, utilizing the natural advantages of HPC interconnects—low latency, high bandwidth, and collective-friendliness—to scale key/value service to millions of cores, implying a need for billions of inserts per second.

In the scaling run that produced this milestone, MDHIM ran as an MPI library on 3,360 processors across 280 nodes of the 308-node Moonlight system, demonstrating nearly two billion inserts per second.
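As a back-of-envelope breakdown of those reported figures (a rough calculation for scale, not a published per-process measurement):

# All inputs below come from the article's description of the run.
total_inserts_per_second = 1_782_105_749
mpi_processes = 3360
nodes_used = 280          # of Moonlight's 308 nodes

per_process = total_inserts_per_second / mpi_processes   # roughly 530,000 inserts/s
per_node = total_inserts_per_second / nodes_used          # roughly 6.4 million inserts/s
print(f"{per_process:,.0f} inserts/s per process; {per_node:,.0f} inserts/s per node")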

MDHIM is a framework on which an application can run thousands of copies of existing key-value stores, in multiple programming environments, exploiting the capabilities of an extreme-scale computing system. MDHIM, which is sponsored by the U.S. Department of Defense, is being used extensively within the Storage and I/O portion of the DOE FastForward project, whose objective is "to initiate partnerships with multiple companies to accelerate the R&D of critical component technologies needed for extreme-scale computing."
