NVIDIA dresses up CUDA parallel computing platform

Jan 28, 2012 by Nancy Owano

(PhysOrg.com) -- This week’s NVIDIA announcement of a dressed-up version of its CUDA parallel computing platform is aimed at engineers, biologists, chemists, physicists, geophysicists and other researchers who rely on GPUs for fast-track computations. The new version features an LLVM (low-level virtual machine)-based CUDA compiler, new imaging and signal processing functions added to the NVIDIA Performance Primitives library, and a redesigned Visual Profiler with automated performance analysis and expert guidance. NVIDIA says the enhancements will advance simulations and computational work for these users.

CUDA is a parallel computing platform and programming model created by NVIDIA. The company promotes CUDA as the pathway to dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). According to the company, with CUDA a developer can send C, C++ and Fortran code straight to the GPU; no assembly language is required.

Generally, developers at scientific companies look to GPU computing for speeding up applications for scientific and engineering computing. With this approach, GPU-accelerated applications run the sequential part of their workload on the CPU while accelerating parallel processing on the GPU.
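The CPU/GPU split described above can be sketched in a few lines of CUDA C. This is an illustrative example, not code from NVIDIA or the article: the host (CPU) does the sequential setup, while a kernel marked `__global__` runs the data-parallel part across thousands of GPU threads.

```cuda
#include <cstdio>
#include <cstdlib>

// GPU kernel: each thread scales one array element in parallel.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Sequential part of the workload runs on the CPU (host).
    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    // Copy to the GPU (device), run the parallel part there, copy back.
    float *d;
    cudaMalloc(&d, bytes);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);

    printf("h[0] = %f\n", h[0]);
    cudaFree(d);
    free(h);
    return 0;
}
```

The `<<<blocks, threads>>>` launch syntax is the only non-C element here, which is the point Ian Buck makes below: developers keep their existing language rather than learning a new one.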

The company notes that a combined team from Harvard Engineering, Harvard Medical School and Brigham & Women's Hospital has used GPUs to simulate blood flow and identify hidden arterial plaque without invasive imaging techniques or exploratory surgery. At NASA, where computer models identify ways to alleviate congestion and keep traffic moving efficiently, a NASA team has used GPUs to boost performance and reduce analysis time.

“When we started creating CUDA, we had a lot of choices for what we could build. The key thing customers said was they didn't want to have to learn a whole new language or API,” said Ian Buck, general manager at NVIDIA. “Some of them were hiring gaming developers because they knew GPUs were fast but didn't know how to get to them.” He said NVIDIA wanted to provide a solution that could be learned in one session and outperform CPU code.

The revised CUDA platform carries three main changes that are supposed to make parallel programming with GPUs easier and faster.

The Visual Profiler is said to deliver, with a few clicks, an automated performance analysis of the user’s application, highlighting problem areas and linking to suggestions for improvement. This eases application acceleration. NVIDIA is also transitioning to new compiler technology based on the LLVM open-source compiler infrastructure, which can deliver an increase in application performance. (LLVM is an umbrella project that hosts and develops a set of close-knit toolchain components such as assemblers, compilers and debuggers; the project started in 2000 at the University of Illinois at Urbana-Champaign.)

New imaging and signal processing functions are increasing the size of the NVIDIA Performance Primitives (NPP) library. The updated NPP library can be used for image and signal processing algorithms, ranging from basic filtering to advanced workflows.
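As a minimal sketch of how NPP replaces hand-written kernels with single library calls, the fragment below applies a box filter to an 8-bit grayscale image already in GPU memory. The entry point `nppiFilterBox_8u_C1R` and its signature are recalled from the NPP library and should be checked against the official NPP documentation; real code must also inset the ROI so the mask does not read outside the image borders.

```cuda
#include <npp.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int w = 640, h = 480;

    // Source and destination images live in GPU memory.
    Npp8u *src, *dst;
    cudaMalloc(&src, w * h);
    cudaMalloc(&dst, w * h);
    cudaMemset(src, 128, w * h);

    NppiSize  roi    = { w, h };   // region of interest (see border caveat above)
    NppiSize  mask   = { 5, 5 };   // 5x5 averaging window
    NppiPoint anchor = { 2, 2 };   // center the mask on each pixel

    // One NPP call replaces a hand-written smoothing kernel.
    NppStatus s = nppiFilterBox_8u_C1R(src, w, dst, w, roi, mask, anchor);
    printf("NPP status: %d\n", (int)s);

    cudaFree(src);
    cudaFree(dst);
    return 0;
}
```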

NVIDIA unveiled CUDA in 2006, announcing it as the world's first solution for general-purpose computing on GPUs. NVIDIA cites some examples of CUDA’s user base on its site. In the consumer market, nearly every major consumer video application has been, or will soon be, accelerated by CUDA, including products from Adobe, Sony, Elemental Technologies, MotionDSP and LoiLo, according to NVIDIA. In scientific research, CUDA accelerates AMBER, a molecular dynamics simulation package used by researchers to speed up new drug discovery.

More information: www.nvidia.com/object/cuda_home_new.html

User comments (5)


1 / 5 (4) Jan 28, 2012
Without CUDA, hacking passwords would be much harder. Well done, Nvidia.
4.3 / 5 (3) Jan 28, 2012
I would not be able to do my research without CUDA; it's a powerful, flexible tool that brings many computational problems that were previously out of reach into the realm of possibility. The Fermi architecture, its automatic cache in particular, is a triumph.
5 / 5 (2) Jan 29, 2012
A press release from Nvidia, unsurprisingly promoting use of their hardware.
If I were contemplating using GPUs in an application, I'd certainly investigate doing it via platform neutral OpenCL, rather than CUDA.
not rated yet Jan 29, 2012
You need particular hardware and a kernel driver to use CUDA = fail. It's just an attempt at vendor lock-in.
not rated yet Jan 29, 2012
Without CUDA, I would not be able to read thinly veiled astroturfing on Physorg comments.
