Cloud computing brings cost of protein research down to Earth

April 10, 2009

Researchers at the Medical College of Wisconsin Biotechnology and Bioengineering Center in Milwaukee have made the expensive and promising field of protein research more accessible to scientists worldwide.

They have developed a set of free tools called ViPDAC (virtual proteomics data analysis cluster), to be used in combination with Amazon's inexpensive "cloud computing" service, which rents processing time on its powerful servers, and with free, open-source analysis software from the National Institutes of Health (NIH) and the University of Manitoba.

Their research appears online in the Journal of Proteome Research and is funded by the NIH National Heart, Lung, and Blood Institute's Proteomics Innovation Center at the Medical College. Proteomics is the large-scale study of all the proteins expressed by an organism; it usually involves identifying proteins and determining how they are modified in both normal and disease states.

One of the major challenges for laboratories setting up proteomics programs has been obtaining and maintaining the costly computational infrastructure required to analyze the vast flow of data generated by mass spectrometry instruments, which determine the elemental composition and chemical structure of molecules, according to senior investigator Simon Twigger, Ph.D., assistant professor of physiology.

"We're applying this technology in our Proteomics Center to study cardiovascular disease, the effects of radiation damage, and in our collaboration with the University of Wisconsin- Madison group," he says.

With cloud computing making the analysis less expensive and more accessible, many more users can set up and customize their own systems. Investigators can analyze their data in greater depth than was previously possible, learning more about the systems they study.

"The tools we have produced allow anyone with a credit card, anywhere in the world, to analyze proteomics data in the cloud and reap the benefits of having significant computing resources to speed up their data analysis," says lead author Brian Halligan, Ph.D., research scientist in the Biotechnology and Bioengineering Center.

"For researchers currently without access to large computer resources, this greatly increases the options to analyze their data. They can now undertake more complex analyses or try different approaches that were simply not feasible for them before."

Until recently, the standard software programs used for proteomics data analysis were almost exclusively commercial, proprietary and expensive. Fees for commercial applications typically rivaled or exceeded the cost of the hardware to run them.

In 2004, a group from the NIH developed and distributed an open-source alternative to commercial proteomics search programs, the Open Mass Spectrometry Search Algorithm (OMSSA). A second open-source search program, X!Tandem, was later developed and released by the Beavis Laboratory at the University of Manitoba.
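Both programs are typically driven from the command line. As a rough sketch of what a single search looks like, the Python snippet below invokes OMSSA's omssacl tool; the database name and file paths are hypothetical placeholders, and the exact options should be checked against the OMSSA documentation.

import subprocess

# Minimal sketch: run one OMSSA search via its command-line tool.
# "human_sprot", "spectra.mgf", and "hits.csv" are placeholder names.
result = subprocess.run(
    [
        "omssacl",
        "-d", "human_sprot",   # formatted protein sequence database (hypothetical)
        "-fm", "spectra.mgf",  # input spectra in Mascot Generic Format
        "-oc", "hits.csv",     # write identifications as CSV
        "-e", "0",             # enzyme id; 0 selects trypsin
    ],
    check=True,
    capture_output=True,
    text=True,
)
print(result.stdout)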

A link on the College's Proteomics Center website http://proteomics.mcw.edu/vipdac provides detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters, as well as a list of currently available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases.
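ViPDAC itself supplies preconfigured images and its own interface, but the underlying step of renting worker nodes from a machine image can be illustrated with the modern AWS SDK for Python (boto3), which postdates the paper. This is a minimal sketch under those assumptions; the AMI ID and instance type are placeholders, not ViPDAC's actual images.

import boto3

# Launch worker nodes from a preconfigured machine image (AMI).
# The image ID below is a hypothetical placeholder.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder preconfigured image
    InstanceType="c5.large",          # compute-optimized node for search jobs
    MinCount=1,
    MaxCount=4,                       # scale the cluster by requesting more workers
)

for instance in response["Instances"]:
    print("launched worker:", instance["InstanceId"])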

"We describe a system that combines distributed-on-demand and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without a huge investment in computational hardware or software licensing fees," says Dr. Halligan.

"The pricing structure of distributed computing providers such as Amazon Web Services allows laboratories, or even individuals, to have large-scale computational resources at their disposal at very low cost per run."

Source: Medical College of Wisconsin
