'Data motion metric' needed for supercomputer rankings, says SDSC's Snavely

Aug 10, 2011

As we enter the era of data-intensive research and supercomputing, the world's top computer systems should not be ranked on calculation speed alone, according to Allan Snavely, associate director of the San Diego Supercomputer Center (SDSC) at the University of California, San Diego.

"I'd like to propose that we routinely compare machines using the metric of data motion capacity, or their ability to move data quickly," Snavely told attendees of the 'Get Ready for Gordon – Summer Institute' being held this week (August 8-11) at SDSC to familiarize potential users with the unique capabilities of SDSC's new Gordon data-intensive .

Gordon, the result of a five-year, $20 million award from the National Science Foundation (NSF), is the first high-performance supercomputer to use large amounts of flash-based SSD (solid state drive) memory. With about 300 trillion bytes of flash memory and 64 I/O nodes, Gordon will be capable of handling massive databases while delivering up to 100 times faster speeds than hard disk drive systems for some queries. Flash memory is common in smaller devices such as mobile phones and laptop computers, but it is new to supercomputers, which generally rely on slower spinning disk technology.
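The size of that speedup for query-style workloads follows from the latency gap between flash and spinning disk. Here is a back-of-the-envelope sketch in Python; the latency figures (roughly 10 ms per disk seek versus roughly 100 microseconds per flash read) are illustrative assumptions, not Gordon's published specifications:

```python
# Back-of-the-envelope comparison of a seek-bound query on flash vs.
# spinning disk. Latency figures are illustrative assumptions, not
# Gordon's published specifications.

DISK_SEEK_S = 10e-3    # ~10 ms average seek + rotational delay (assumed)
FLASH_READ_S = 100e-6  # ~100 microsecond flash read latency (assumed)

def query_time(random_reads, latency_s):
    """Total time for a query dominated by random I/O."""
    return random_reads * latency_s

reads = 1_000_000  # a query touching a million scattered records
disk_t = query_time(reads, DISK_SEEK_S)    # ~10,000 s
flash_t = query_time(reads, FLASH_READ_S)  # ~100 s

print(f"disk:    {disk_t:,.0f} s")
print(f"flash:   {flash_t:,.0f} s")
print(f"speedup: {disk_t / flash_t:.0f}x")  # ~100x, the order the article cites
```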

The system is set to formally enter production on January 1, 2012, although pre-production allocations on some parts of the cluster will start as early as this month for U.S. academic researchers.

"This may be a somewhat heretical notion, but at SDSC we want a supercomputer to be data capable, not just FLOP/S capable," said Snavely, whom along with many other HPC experts now contend that supercomputers should also be measured by their overall ability to help researchers solve real-world science problems. Snavely's proposal includes a measurement that weights DRAM, flash memory, and disk capacity according to access time in a compute cycle.

A common term within the supercomputing community, peak speed is the fastest rate at which a supercomputer can calculate. It is typically measured in FLOP/S, which stands for FLoating point OPerations per Second; in lay terms, peak calculations per second. In June, a Japanese supercomputer capable of performing more than 8 quadrillion calculations per second (8 petaflop/s) was ranked the top system in the world, putting Japan back in the top spot for the first time since 2004, according to the latest edition of the TOP500 List of the world's supercomputers, which has ranked systems by this metric since 1993. The system, called the K Computer, is housed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan, and displaced China's Tianhe-1A as the fastest supercomputer in the rankings.

"Everyone says we are literally drowning in data, but here are some simple technical reasons," said Snavely. "The number of cycles for computers to access data is getting longer – in fact disks are getting slower all the time as their capacity goes up but access times stay the same. It now takes twice as long to examine a disk every year, or put another way, this doubling of capacity halves the accessibility to any random data on a given media.

"That's a pernicious outcome for Moore's Law," he said, noting that as the number of cycles for computers to access data gets longer, some large-scale systems are just "spending time twiddling their thumbs."

