Professor works to overcome challenges in harnessing power of multicore computer processors

Jan 15, 2013 by Karen B. Roberts
UD's John Cavazos is working to overcome challenges in harnessing the power of multicore computer processors. Credit: Lane McLaughlin

(Phys.org)—Computer processors that can complete multiple tasks simultaneously have been available in the mainstream for almost a decade. In fact, almost all processors developed today are multicore processors. Yet, computer programmers still struggle to efficiently harness their power because it is difficult to write correct and efficient parallel code.

According to the University of Delaware's John Cavazos, to effectively exploit the power of multi-core processors, programs must be structured as a collection of independent tasks where separate tasks are executed on independent cores.

The complexity of modern software, however, makes this difficult.
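
In a small program, though, the principle is easy to see. The sketch below is a hypothetical example using standard C++ threads, not code from Cavazos' project: two independent tasks are handed to separate threads, and the operating system is free to place them on different cores.

    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Two independent tasks: neither one reads or writes the other's data,
    // so they can safely run at the same time on separate cores.
    void sum_task(const std::vector<int>& data, long long& result) {
        result = std::accumulate(data.begin(), data.end(), 0LL);
    }

    void max_task(const std::vector<int>& data, int& result) {
        result = *std::max_element(data.begin(), data.end());
    }

    int main() {
        std::vector<int> a(1000000, 1), b(1000000, 2);
        long long sum = 0;
        int max = 0;

        // Each task gets its own thread; the operating system schedules
        // the two threads on independent cores when they are available.
        std::thread t1(sum_task, std::cref(a), std::ref(sum));
        std::thread t2(max_task, std::cref(b), std::ref(max));
        t1.join();
        t2.join();

        std::cout << "sum=" << sum << " max=" << max << "\n";
    }

Real applications rarely decompose this cleanly, which is exactly the difficulty the project targets.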

Now Cavazos, assistant professor of computer and information sciences, is attempting to invent new algorithms and tools for parallelizing large-scale programs as principal investigator (PI) of a new National Science Foundation (NSF) grant.

The work, funded through a $497,791 grant from the Division of Computer and Communication Foundations, is a collaboration with Michael Spear, assistant professor of computer science and engineering at Lehigh University. It involves using a novel combination of automatic and profile-driven techniques to address fundamental issues in creating parallel programs.

Compilers translate applications written by programmers into machine code that executes on a computer. One of their important tasks is to help applications run more efficiently through parallelization. However, while a variety of parallelization techniques are available, such as automatic parallelization and speculative parallelization, no one technique is applicable to all programs.
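
Which technique fits depends on the structure of the code. As a rough illustration (assuming an OpenMP-capable compiler; without OpenMP enabled, the pragma is ignored and the loop simply runs serially), the first loop below has fully independent iterations and is a natural target for automatic parallelization, while the second carries a dependence from each iteration to the next and resists that transformation.

    #include <vector>

    // Iterations are independent: c[i] depends only on a[i] and b[i],
    // so a compiler (here prompted by an OpenMP pragma) can split the
    // iterations across cores.
    void add_arrays(const std::vector<double>& a, const std::vector<double>& b,
                    std::vector<double>& c) {
        #pragma omp parallel for
        for (long i = 0; i < (long)c.size(); ++i)
            c[i] = a[i] + b[i];
    }

    // Each iteration reads the value written by the previous one (a
    // loop-carried dependence), so the same transformation does not apply.
    void prefix_sum(std::vector<double>& a) {
        for (std::size_t i = 1; i < a.size(); ++i)
            a[i] = a[i - 1] + a[i];
    }

Speculative parallelization, discussed further below, targets loops that fall in between: ones whose iterations may or may not conflict depending on the input data.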

Cavazos will use machine learning to create a system that enables a compiler to analyze a program, assess which parallelization technique is suitable, and then automatically apply that technique to the program. He is also investigating how to enable the program, through learning, to adapt its behavior based on outside inputs and environments.
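
The article does not describe the system's internals, but the overall shape of such a decision can be sketched: features gathered by static analysis and profiling are fed to a learned model that selects a technique for each loop. In the hypothetical sketch below, a hand-written stub stands in for the trained classifier; the feature names and thresholds are assumptions for illustration only.

    #include <iostream>

    // Features a compiler or profiler might extract for a candidate loop.
    // These particular features and thresholds are illustrative, not the
    // ones used in the project.
    struct LoopFeatures {
        bool provably_independent;    // analysis proves no cross-iteration dependence
        bool may_conflict_at_runtime; // dependences exist only for some inputs
        long trip_count;              // iterations observed during profiling
    };

    enum class Technique { AutomaticParallel, Speculative, KeepSerial };

    // Stand-in for a learned model: in a real system this decision boundary
    // would be induced from training data, not hand-coded.
    Technique choose_technique(const LoopFeatures& f) {
        if (f.trip_count < 1000)       return Technique::KeepSerial;       // too little work to pay off
        if (f.provably_independent)    return Technique::AutomaticParallel;
        if (f.may_conflict_at_runtime) return Technique::Speculative;      // optimistic, with runtime checks
        return Technique::KeepSerial;
    }

    int main() {
        LoopFeatures hot_loop{false, true, 500000};
        std::cout << "chosen technique: "
                  << static_cast<int>(choose_technique(hot_loop)) << "\n";
    }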

The project extends research he began in 2010, when he received an NSF Faculty Early Career Development Award to develop adaptive compilers for multi-core computer environments. It also complements his recent work with the Defense Advanced Research Projects Agency to construct an extreme-scale software framework capable of automatically partitioning and mapping application code to a multi-core system and generating "SMART" applications that can reconfigure underlying hardware to save power.

"This research will enable a greater percentage of programs to benefit from by providing feedback to programmers so that they can improve the code, and by integrating adaptability so that a broader range of programs can achieve increased speed," Cavazos said.

As the project progresses, Cavazos plans to develop new algorithms and tools for speculative parallelization, a technique that allows shared-memory systems to execute in parallel loops that a compiler cannot otherwise prove to be parallelizable. Ultimately, he hopes to distribute the new prototypes and source code as open-source software.
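
The article does not spell out how those algorithms will work, but the basic speculative pattern can be shown with a deliberately simplified sketch: iterations run optimistically in parallel against private buffers, a conflict check follows, and the loop falls back to serial execution if the optimism was misplaced. The function names, the rollback policy, and the toy workload below are assumptions made purely for illustration.

    #include <iostream>
    #include <thread>
    #include <unordered_map>
    #include <unordered_set>
    #include <vector>

    // Serial semantics to preserve: a[idx[i]] += 1 for every i.
    // Whether iterations conflict depends on idx, known only at runtime.
    void speculative_histogram(std::vector<long>& a, const std::vector<int>& idx,
                               unsigned num_threads) {
        std::vector<std::unordered_map<int, long>> private_updates(num_threads);
        std::vector<std::thread> workers;

        // 1. Optimistic phase: each thread buffers its increments privately,
        //    leaving the shared array untouched.
        for (unsigned t = 0; t < num_threads; ++t) {
            workers.emplace_back([&, t] {
                for (std::size_t i = t; i < idx.size(); i += num_threads)
                    private_updates[t][idx[i]] += 1;
            });
        }
        for (auto& w : workers) w.join();

        // 2. Conflict check: did two threads touch the same element?
        //    (A real system would use a finer policy; any sharing triggers
        //    rollback here.)
        std::unordered_set<int> seen;
        bool conflict = false;
        for (const auto& updates : private_updates)
            for (const auto& kv : updates)
                if (!seen.insert(kv.first).second) conflict = true;

        if (!conflict) {
            // 3a. Commit: the buffered updates touch disjoint elements.
            for (const auto& updates : private_updates)
                for (const auto& kv : updates)
                    a[kv.first] += kv.second;
        } else {
            // 3b. Misspeculation: discard the buffers and re-execute serially.
            for (int i : idx) a[i] += 1;
        }
    }

    int main() {
        std::vector<long> a(8, 0);
        std::vector<int> idx = {0, 1, 2, 3, 4, 5, 6, 7};  // no duplicates: speculation succeeds
        speculative_histogram(a, idx, 4);
        std::cout << "a[3] = " << a[3] << "\n";
    }

Either path produces the same result as the serial loop; the payoff comes when conflicts are rare and the optimistic path usually wins.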

Key educational initiatives include training graduate students and integrating performance-related topics into his classroom instruction.

"Helping students to learn the fundamentals of creating that is both correct and efficient needs to be a major educational goal in our science departments," said Cavazos.
