IBM's new architecture can double analytics processing speed

November 19, 2010

At the Supercomputing 2010 conference, IBM today unveiled details of a new storage architecture design, created by IBM scientists, that will convert terabytes of raw data into actionable insights twice as fast as previously possible.

Ideally suited for cloud computing applications and data-intensive workloads such as digital media and financial analytics, this new architecture will shave hours off of complex computations without requiring heavy infrastructure investment. IBM won the conference's Storage Challenge competition for presenting the most innovative and effective design in high performance computing, with the best measurements of performance, scalability and storage subsystem utilization.

Running analytics applications on extremely large data sets is becoming increasingly important, but organizations can only expand their storage facilities so much. As businesses search for ways to harness their large stores of data for new levels of business insight, they need alternative solutions such as cloud computing to keep up with growing data requirements and to achieve workload flexibility through the rapid provisioning of system resources for different types of workloads.

"Businesses are literally running into walls, unable to keep up with the vast amounts of data generated on a daily basis," said Prasenjit Sarkar, Master Inventor, Storage Analytics and Resiliency, IBM Research – Almaden. "We constantly research and develop the industry's most advanced storage technologies to solve the world's biggest data problems. This new way of storage partitioning is another step forward on this path as it gives businesses faster time-to-insight without concern for traditional storage limitations."

Created at IBM Research – Almaden, the new General Parallel File System-Shared Nothing Cluster (GPFS-SNC) architecture is designed to provide higher availability through advanced clustering technologies, dynamic file system management and advanced data replication techniques. By "sharing nothing," new levels of availability, performance and scaling are achievable. GPFS-SNC is a distributed computing architecture in which each node is self-sufficient; tasks are divided up between these independent computers, and no node waits on another.
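The shared-nothing idea described above can be illustrated with a minimal sketch: each node owns its own data partition and computes over it in isolation, with results combined only at the end. The names here (`Node`, `run_analytics`) are illustrative assumptions, not part of the actual GPFS-SNC implementation.

```python
# Minimal sketch of a shared-nothing analytics run. Each node is
# self-sufficient: it holds its own data partition and computes over
# it alone, so no node waits on -- or shares state with -- another.

class Node:
    """One self-sufficient node owning a local slice of the data."""

    def __init__(self, local_records):
        self.local_records = local_records  # stored only on this node

    def local_sum(self):
        # Computes strictly over the local partition; no shared state.
        return sum(self.local_records)


def run_analytics(all_records, num_nodes):
    # Partition the data set across independent nodes (round-robin).
    partitions = [all_records[i::num_nodes] for i in range(num_nodes)]
    nodes = [Node(p) for p in partitions]
    # Each node works independently; partial results are merged last.
    partial_results = [node.local_sum() for node in nodes]
    return sum(partial_results)


print(run_analytics(list(range(10)), 3))  # 0 + 1 + ... + 9 = 45
```

In a real deployment the nodes would be separate machines running in parallel; the point of the sketch is only the data ownership model, in which scaling comes from adding more independent nodes rather than from a shared storage bottleneck.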

IBM's current GPFS technology offering is the core technology for IBM's High Performance Computing Systems, IBM's Information Archive, IBM Scale-Out NAS (SONAS), and the IBM Smart Business Compute Cloud. These research lab innovations enable future expansion of those offerings to further tackle tough big data problems.

For instance, large financial institutions run complex algorithms to analyze risk based on petabytes of data. With billions of files spread across multiple computing platforms and stored across the world, these mission-critical calculations require significant IT resources and cost because of their complexity. Using the GPFS-SNC design, running this complex analytics workload could become much more efficient, as the design provides a common file system and namespace across disparate computing platforms, streamlining the process and reducing disk space.
