New parallelization technique boosts our ability to model biological systems

Jun 09, 2011

Researchers at North Carolina State University have developed a new technique for using multi-core chips more efficiently, significantly enhancing a computer's ability to build models of biological systems. The technique made the algorithms used to build those models run more than seven times faster, making it practical to create more realistic models that account for uncertainty and biological variation. This could affect research areas ranging from drug development to the engineering of biofuels.

Computer models of biological systems have many uses, from predicting the potential side effects of new drugs to understanding how plants adjust to changing environmental conditions. But developing models of living things is challenging because, unlike machines, biological systems can have a significant amount of uncertainty and variation.

"When developing a model of a , you have to use techniques that account for that uncertainty, and those techniques require a lot of ," says Dr. Cranos Williams, an assistant professor of electrical engineering at NC State and co-author of a paper describing the research. "That means using . Those computers are expensive, and access to them can be limited.

"Our goal was to develop software that enables scientists to run biological models on conventional computers by utilizing their multi-core chips more efficiently."

The brain of a computer chip is its processing unit, or "core." Most personal computers now use chips that have between four and eight cores. However, most programs run on only one core at a time. For a program to use all of those cores, it has to be broken down into separate "threads," so that each core can execute a different part of the program simultaneously. Breaking a program down into threads is called parallelization, and it allows computers to run programs much more quickly.
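
As a rough illustration (not code from the NC State project), here is how a simple computation can be split into threads in C++, with each thread summing its own slice of an array so that several cores can work at once:

#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    // Use one thread per available core (hardware_concurrency may report 0).
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> data(1000000, 1.0);
    std::vector<double> partial(cores, 0.0); // one private result per thread
    std::vector<std::thread> workers;

    // Give each thread its own contiguous slice of the input.
    const auto chunk = data.size() / cores;
    for (unsigned t = 0; t < cores; ++t) {
        workers.emplace_back([&, t] {
            auto begin = data.begin() + t * chunk;
            auto end = (t + 1 == cores) ? data.end() : begin + chunk;
            partial[t] = std::accumulate(begin, end, 0.0); // this thread's slice
        });
    }
    for (auto& w : workers) w.join(); // wait for every core to finish

    std::cout << std::accumulate(partial.begin(), partial.end(), 0.0) << "\n";
}

Because each thread writes only to its own slot in partial, no two cores ever touch the same data and no coordination between them is needed; sharing data between cores, the next step, is where locks come in.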

In order to "parallelize" algorithms for building models of biological systems, Williams' research team created a way for information to pass back and forth between the cores on a single chip. Specifically, Williams explains, "we used threads to create 'locks' that control access to shared data. This allows all of the cores on the chip to work together to solve a unified problem."
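
The team's actual implementation isn't reproduced here, but a minimal sketch of the lock idea, in C++ with hypothetical names, might look like the following: worker threads repeatedly take tasks (say, candidate parameter ranges to evaluate) from one shared queue, and a mutex serves as the "lock" that ensures only one core modifies the queue at a time.

#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

std::queue<int> tasks;  // shared data: e.g. IDs of parameter ranges to test
std::mutex tasks_lock;  // the "lock" controlling access to the shared queue

void worker() {
    for (;;) {
        int task;
        {
            std::lock_guard<std::mutex> guard(tasks_lock); // acquire the lock
            if (tasks.empty()) return; // no work left for this core
            task = tasks.front();
            tasks.pop();
        } // lock is released here, so other cores can grab work
        (void)task; // placeholder: evaluate the model for this task here
    }
}

int main() {
    for (int i = 0; i < 100; ++i) tasks.push(i); // fill the shared queue
    std::vector<std::thread> pool;
    for (int t = 0; t < 8; ++t) pool.emplace_back(worker);
    for (auto& th : pool) th.join();
    std::cout << "all tasks processed\n";
}

Holding the lock only while the queue is being modified, and doing the expensive model evaluation outside it, is what lets all of the cores cooperate on a single problem without serializing the real work.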

The researchers tested the approach by running three models on a chip using a single core, and on chips using the new technique with two, four and eight cores. For all three models, the eight-core runs were at least 7.5 times faster than the single-core runs.
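
A 7.5-fold speedup on eight cores is close to the ideal factor of eight. As a back-of-envelope gloss (an inference, not a figure from the paper), Amdahl's law relates the speedup S on n cores to the fraction p of the work that can be parallelized; a 7.5x speedup on eight cores implies that roughly 99 percent of the computation ran in parallel:

S(n) = \frac{1}{(1-p) + p/n}, \qquad 7.5 = \frac{1}{(1-p) + p/8} \;\Longrightarrow\; p \approx 0.99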

"This approach allows us to build complex models that better reflect the true characteristics of the biological process, and do it in a more computationally efficient way," says Williams. "This is important. In order to understand , we will need to use increasingly complex models to address the uncertainty and variation inherent in those systems."

Ultimately, Williams and his team hope to determine whether this approach can be scaled up for use on supercomputers, and whether it can be modified to take advantage of the many cores available on the graphics processing units found in many machines.

More information: The paper, "Parameter Estimation In Biological Systems Using Interval Methods With Parallel Processing," was presented at the Workshop on Computational Systems Biology in Zurich, Switzerland, June 6-8.

User comments

SincerelyTwo
5 / 5 (2) Jun 09, 2011
Mutual exclusion (mutexes) is very common and widely used. Not only that, but most clever programmers utilize GPUs to achieve performance boosts of at least an order of magnitude. Granted, in some cases that doesn't work out well; not all algorithms can be adapted to parallelization on a GPU. However, Sony already invented the CELL processor, which helps bridge that gap, and quite effectively at that.

As far as I can tell, these programmers haven't accomplished anything that isn't already widely available or widely known. What exactly makes their solution so special?

I'll give them the benefit of the doubt and criticize whoever wrote this article; the content here is complete garbage.
gmurphy
not rated yet Jun 09, 2011
@SincerelyTwo, the abstract is given here: http://news.ncsu....rallel/. The gist of the paper is about optimizing model parameter estimation by running it on parallel cores, I kid you not.