Billion inserts-per-second data milestone reached for supercomputing tool

June 2, 2014

At Los Alamos, a supercomputer epicenter where "big data set" really means something, a data middleware project has achieved a milestone for specialized information organization and storage. The Multi-dimensional Hashed Indexed Middleware (MDHIM) project at Los Alamos National Laboratory recently achieved 1,782,105,749 key/value inserts per second into a globally ordered key space on Los Alamos National Laboratory's Moonlight supercomputer.

"In the current highly parallel computing world, the need for scalability has forced the world away from fully transactional databases and back to the loosened semantics of key value stores," says Gary Grider, High Performance Computing division leader at Los Alamos.

Computer simulations overall are scaling to higher parallel-processor counts, simulating finer physical scales or more complex physical interactions. As they do so, the simulations produce ever-larger data sets that must be analyzed to yield the insights scientists need.

"This milestone was achieved by a combination of good software design and refined algorithms. Our code is available on GitHub and we encourage others to build upon it," said Hugh Greenberg, project leader and lead developer of the MDHIM project.

Traditionally, much data analysis has been visual: data are turned into images or movies, and statistical analysis generally occurs over the entire data set. But detailed analysis of entire data sets is becoming untenable because of the resources required to move, search, and analyze all the data at once. The ability to identify, retrieve, and analyze smaller subsets of data within the multidimensional whole would make detailed analysis far more practical. To achieve this, it is essential to find strategies for managing the multiple dimensions of simulation data.

The MDHIM project aims to create a middle-ground framework between fully relational databases and distributed but completely local constructs like map/reduce. MDHIM gives applications the mechanisms of a parallel key/value store, storing data in global multi-dimensional order and sub-setting massive data in multiple dimensions, as well as the functions of a distributed hash table, with simple but massively parallel lookups.
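
The contrast between hash placement and globally ordered placement can be sketched as follows. This is an illustrative toy, not the MDHIM API; the server count and key-space size are made-up parameters:

```python
# Illustrative sketch (not the MDHIM API): contrast hash partitioning,
# which scatters keys across servers, with range partitioning, which
# preserves a globally ordered key space across server ranks.

NUM_SERVERS = 4     # hypothetical number of key/value server ranks
KEY_SPACE = 1000    # hypothetical maximum key value

def hash_partition(key):
    """Distributed-hash-table placement: fast point lookups, no global order."""
    return hash(key) % NUM_SERVERS

def range_partition(key):
    """Range placement: each server owns a contiguous slice of keys, so the
    union of all servers' slices is a globally sorted key space."""
    return min(key * NUM_SERVERS // KEY_SPACE, NUM_SERVERS - 1)

# With range partitioning, nearby keys land on the same server,
# which is what makes multi-dimensional sub-setting cheap.
assert range_partition(10) == range_partition(11)
assert range_partition(0) == 0
assert range_partition(999) == NUM_SERVERS - 1
```

The design trade-off in the article maps onto these two placement functions: hash placement gives the simple, massively parallel lookups of a distributed hash table, while range placement preserves the global ordering needed for sub-setting.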

Records are sorted globally in however many ways an application chooses. Via the MDHIM library, applications can implement anything from shared-nothing map/reduce-style functionality to deeply indexed data with rich information about statistical distributions across all keys. This allows global identification and retrieval of relevant data subsets for further analysis.
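
Why global ordering enables cheap subset retrieval can be shown with a small sketch. This is an illustrative toy under assumed per-server key boundaries, not MDHIM code:

```python
# Illustrative sketch (not the MDHIM API): when keys are range-partitioned
# in global order, a subset (range) query needs to contact only the server
# ranks whose key slices overlap the requested range.
import bisect

# Hypothetical boundaries: server i owns keys in [BOUNDS[i], BOUNDS[i+1]).
BOUNDS = [0, 250, 500, 750, 1000]

def servers_for_range(lo, hi):
    """Return the server ranks whose slices intersect the key range [lo, hi)."""
    first = bisect.bisect_right(BOUNDS, lo) - 1
    last = bisect.bisect_left(BOUNDS, hi) - 1
    return list(range(first, last + 1))

# A query for keys 300..599 touches only servers 1 and 2,
# instead of broadcasting to every rank.
assert servers_for_range(300, 600) == [1, 2]
assert servers_for_range(0, 100) == [0]
```

With a hash-scattered key space, the same range query would have to contact every server; the globally ordered layout is what keeps subset analysis proportional to the subset, not the whole data set.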

MDHIM is designed to represent petabytes of data with mega- to gigabytes of representation data, utilizing the natural advantages of HPC interconnects—low latency, high bandwidth, and collective-friendliness—to scale key/value service to millions of cores, implying a need for billions of inserts per second.

In this sample scaling run, MDHIM ran as an MPI library on 3,360 processors across 280 nodes of the 308-node Moonlight system, demonstrating nearly two billion inserts per second.
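
Dividing the reported aggregate rate by the process and node counts gives a sense of the per-rank workload. This is a simple arithmetic check using the article's figures, not an analysis from the project:

```python
# Back-of-the-envelope check using the figures reported in the article:
# total insert rate spread over the MPI processes and nodes in the run.
total_inserts_per_sec = 1_782_105_749
processes = 3360
nodes = 280

per_process = total_inserts_per_sec // processes  # roughly 530,000 inserts/s per process
per_node = total_inserts_per_sec // nodes         # roughly 6.4 million inserts/s per node
print(per_process, per_node)
```

Sustaining hundreds of thousands of inserts per second on every rank simultaneously is what requires the low-latency, high-bandwidth HPC interconnects mentioned above.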

MDHIM is a framework on which an application can run thousands of copies of existing key/value stores, in multiple programming environments, exploiting the capabilities of an extreme-scale computing system. MDHIM, sponsored by the U.S. Department of Defense, is being used extensively within the Storage and I/O portion of the DOE FastForward project, whose objective is "to initiate partnerships with multiple companies to accelerate the R&D of critical component technologies needed for extreme-scale computing."
