Cycle Computing uses Amazon cloud services to do the work of a supercomputer

Nov 13, 2013 by Bob Yirka

(Phys.org) —Computer services company Cycle Computing has announced that it used Amazon's servers to run software for a client, simulating the properties of 205,000 molecules over an 18-hour period on 156,000 Amazon cores. The cost to the client, the University of Southern California, was $33,000.

Supercomputers are big, fast and extremely expensive. For that reason, researchers have begun looking for other ways to process huge amounts of data for less money. Rushing in to fill that void are companies that match clients with distributed computing services such as those offered by Google, Microsoft or Amazon. Cycle Computing is one such company. In this latest endeavor, Mark Thompson of USC wanted a faster way to crunch the mammoth amount of data needed to identify molecules that might be useful for creating photovoltaic cells; in the past, such analysis was done one molecule at a time. More recently, software has been developed that can do the crunching, in this case Schrödinger's Materials Science software suite. Unfortunately, crunching the data for so many molecules takes more computing resources than USC had available. That's where Cycle Computing came in: the company connected Thompson and his software with Amazon servers running all over the world, all at the same time. The result was an analysis of the suitability of 205,000 molecules in just 18 hours, a task that would have taken an estimated 264 years on a conventional computer.
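
A quick back-of-the-envelope check of those figures (a sketch only, assuming the quoted 264-year estimate means a single conventional machine working through the jobs serially, which the article does not spell out):

    # Rough sanity check of the quoted numbers; the "conventional computer"
    # baseline is assumed to be one machine running the jobs serially.
    serial_hours = 264 * 365 * 24                 # 264 years ~= 2,312,640 hours
    wall_clock_hours = 18
    cores = 156_000
    cost_usd = 33_000

    speedup = serial_hours / wall_clock_hours     # ~128,500x
    efficiency = speedup / cores                  # ~0.82, plausible parallel scaling
    cost_per_core_hour = cost_usd / (cores * wall_clock_hours)   # ~$0.012

    print(f"speedup ~{speedup:,.0f}x, efficiency ~{efficiency:.0%}, "
          f"~${cost_per_core_hour:.3f} per core-hour")

At roughly a cent per core-hour, the arithmetic lines up with the article's framing: the rented cluster behaves like a supercomputer for a tiny fraction of a supercomputer's price.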

The idea of using distributed server systems offered by big-name companies hyping cloud services has become very enticing for organizations looking to crunch massive amounts of data without forking over the huge sums normally associated with buying a supercomputer or renting time on one owned by someone else. And as with many business models, a need has arisen for companies with expertise in connecting applications to such services, which is no small feat. To get the job done for USC, Cycle Computing had to secure the resources from Amazon, provide a pipeline between the client's data and the Amazon servers, and reallocate resources when outages occurred, all while making sure the budget wasn't overrun. Cycle Computing managed the job using custom software it calls Jupiter. Company reps also noted that jobs such as the one performed for USC are particularly well suited to cloud servers because the work is "pleasantly parallel": the different parts of the project could easily be broken into separate jobs and handled independently.
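
Jupiter itself is proprietary and its interfaces are not public, but the "pleasantly parallel" pattern the reps describe is easy to illustrate. Here is a minimal Python sketch; the molecule IDs, the score_molecule function and its random scoring are hypothetical stand-ins, not Cycle Computing's or Schrödinger's actual code:

    import random
    from concurrent.futures import ProcessPoolExecutor

    def score_molecule(molecule_id: str) -> tuple[str, float]:
        # Stand-in for one independent simulation job (e.g. one
        # quantum-chemistry run); a random number replaces the real physics.
        return molecule_id, random.random()

    def screen(molecules: list[str], workers: int = 8) -> dict[str, float]:
        # Each job needs no data from any other, so the list can be fanned
        # out to any number of workers: a few local cores in this sketch,
        # ~156,000 cloud cores in the run described above.
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return dict(pool.map(score_molecule, molecules, chunksize=64))

    if __name__ == "__main__":
        results = screen([f"mol-{i}" for i in range(1_000)])
        best = max(results, key=results.get)
        print(f"best candidate: {best} (score {results[best]:.3f})")

Swapping the local process pool for a fleet of cloud instances changes the plumbing but not the per-job logic; that independence between jobs is exactly what "pleasantly parallel" means.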

User comments: 1


baudrunner
Nov 13, 2013
The same could have been accomplished for free using a system similar to the one that SETI used to analyze radio telescope data. In short, personal computer users the world over ran a proprietary app, downloaded from the SETI project site, transparently in the background whilst doing their own thing in the foreground. All that USC needs to do is provide the app to computer users who offer their personal computers' resources voluntarily. That $33,000 could have been used to pay the salary of one more post-grad student doing valuable research.
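
For context, the SETI@home model the commenter invokes (later generalized by BOINC) amounts to a server handing out independent work units to volunteers' machines, which crunch them in the background and send results back. A hypothetical sketch of such a client loop, with a made-up example.edu endpoint and a dummy crunch function in place of the real analysis:

    import json
    import urllib.request

    SERVER = "https://example.edu/workqueue"   # hypothetical project endpoint

    def crunch(unit: dict) -> float:
        # Dummy stand-in for the analysis a volunteer's machine would run
        # in the background while its owner works in the foreground.
        return sum(unit.get("samples", []))

    def volunteer_loop(max_units: int = 10) -> None:
        # Fetch independent work units, process them locally, post results.
        for _ in range(max_units):
            with urllib.request.urlopen(f"{SERVER}/next") as resp:
                unit = json.load(resp)
            body = json.dumps({"id": unit["id"], "result": crunch(unit)})
            req = urllib.request.Request(
                f"{SERVER}/submit", data=body.encode(),
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)

Real volunteer projects layer validation, redundancy and scheduling on top of this loop; providing that infrastructure is what BOINC exists for.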