Open-source GPU could push computing power to the next level

January 19, 2016
Binghamton University computer science assistant professor Timothy Miller and co-authors developed Nyami, a synthesizable graphics processing unit (GPU) architectural model for general-purpose and graphics-specific workloads. Credit: Jonathan Cohen, Binghamton University

Researchers at Binghamton University have become the first to use an open-source graphics processing unit (GPU) for research.

Binghamton University computer science assistant professor Timothy Miller, Aaron Carpenter and graduate student Philip Dexter, along with co-author Jeff Bush, have developed Nyami, a synthesizable graphics processing unit (GPU) architectural model for general-purpose and graphics-specific workloads. This marks the first time a team has taken an open-source GPU design and run a series of experiments on it to see how different hardware and software configurations would affect the circuit's performance.

According to Miller, the results will help other scientists make their own GPUs and push computing power to the next level.

"As a researcher, it's important to have tools for realistically evaluating new ideas that may improve performance or address other challenges in processor architecture," Miller said. "While simulators may take shortcuts, an actual synthesizable open-source processor can't cut any corners, so we can say that any experimental results we get are especially reliable."

GPUs have existed for about 40 years and are typically found on commercial video or graphics cards inside a computer or gaming console. The specialized circuits have computing power designed to make images appear smoother and more vibrant on a screen. There has recently been a movement to see whether these chips can be applied to non-graphical computations, such as algorithms that process large chunks of data.
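That general-purpose idea can be sketched in a few lines (plain Python standing in for GPU code; the function names here are invented for illustration): a GPU applies the same small "kernel" to every element of a large array, conceptually one thread per element.

```python
def saxpy_kernel(i, a, x, y):
    # One logical "thread": computes a single output element,
    # the way one GPU work-item would (y[i] = a * x[i] + y[i]).
    return a * x[i] + y[i]

def launch(n, a, x, y):
    # A real GPU would run all n kernel invocations in parallel on
    # its shader cores; this loop just shows the programming model.
    return [saxpy_kernel(i, a, x, y) for i in range(n)]

result = launch(3, 2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
# result is [12.0, 24.0, 36.0]
```

Because every invocation is independent, the same pattern applies equally to graphics (one kernel per pixel or vertex) and to the non-graphical workloads the article mentions.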

"We weren't necessarily looking for novelty in the results, so much as we wanted to create a new tool and then show how it could be used," said Carpenter. "I hope people experiment more effectively on GPUs, as both hobbyists and researchers, creating a more efficient design for future GPUs."

The open-source GPU that the Binghamton team used for their research was the first of its kind. Although thousands of GPUs are produced each year commercially, this is the first that can be modified by enthusiasts and researchers to get a sense of how changes may affect mainstream chips. Bush, the director of software engineering at Roku, was the lead author on the paper.

"It was bad for the open-source community that GPU manufacturers had all decided to keep their chip specifications secret. That prevented open source developers from writing software that could utilize that hardware," Miller said. Miller began working on similar projects in 2004, while Bush started working on Nyami in 2010. "This makes it easier for other researchers to conduct experiments of their own, because they don't have to reinvent the wheel. With contributions from the 'open hardware' community, we can incorporate more creative ideas and produce an increasingly better tool."

The ramifications of the findings could make it easier for researchers to work with processors and explore different design tradeoffs. Dexter, Miller, Carpenter and Bush have paved a new road that could lead to discoveries affecting everything from space travel to heart surgery.

"I've got a list of paper research ideas we can explore using Nyuzi [the chip has since been renamed], focusing on various performance bottlenecks. The idea is to look for things that make Nyuzi inefficient compared to other GPUs and address those as research problems. We can also use Nyuzi as a platform for conducting research that isn't GPU-specific, like energy efficiency and reliability," Miller said.

The paper, "Nyami: A Synthesizable GPU Architectural Model for General-Purpose and Graphics-Specific Workloads," appeared at the International Symposium on Performance Analysis of Systems and Software (ISPASS).




Jan 20, 2016
"That prevented open source developers from writing software that could utilize that hardware"

That's not entirely true. All the vendors use common programming interfaces for the hardware, such as DirectX or OpenGL, providing means to utilize the hardware in both open and closed source software.

The real problem was that the Open Source community didn't really want to play ball with the hardware vendors when it came to drivers that implement those interfaces. They ultimately wanted the drivers to be open source, but the manufacturers didn't want to give away their trade secrets to competitors, so the discussion came down to revealing the "specifications" that would let the community at least talk to the hardware directly.

AMD did release specifications for a large number of their chips, but then the community didn't really do anything with them, because it turned out that the open-source graphics system they were using was too far behind the times.
Jan 21, 2016
On the other hand, GPU chips are primarily designed for computing 2D/3D graphics.

But they also provide interfaces for computation through OpenCL and the like.

The actual hardware is now turning more and more generic, a huge blob of parallel programmable DSPs with a fast memory interface, and the actual implementations of the algorithms that turn data into graphics are loaded in from the drivers.

It used to be that the graphics programming interface was implemented completely in software, running primarily on the CPU; then the 3D "accelerator" cards came in to substitute parts of that code with dedicated hardware that would perform a particular function and return an answer. The hardware was a simple state machine: if in state X, perform function Y on data Z.

Then more and more functions were replaced with hardware implementations and the functions got more complex, until they were finally replaced with a bunch of small programmable CPUs with the graphics API on top.
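The evolution described above can be caricatured in a few lines of Python (a sketch with invented names, not real driver or hardware code): the fixed-function era hard-wired a lookup from pipeline state to a canned operation, while a programmable GPU instead accepts an arbitrary small program for each stage.

```python
# Fixed-function era: "if in state X, perform function Y on data Z".
FIXED_PIPELINE = {
    "transform": lambda v: [x * 2 for x in v],  # canned vertex transform
    "shade":     lambda v: [x + 1 for x in v],  # canned shading step
}

def fixed_function_stage(state, data):
    # The hardware is a state machine: the current state selects
    # which hard-wired circuit processes the data.
    return FIXED_PIPELINE[state](data)

# Programmable era: the driver uploads an arbitrary kernel instead.
def programmable_stage(kernel, data):
    # The "hardware" runs whatever small program it was given, once
    # per element; the graphics API now lives on top as software.
    return [kernel(x) for x in data]

# The same transform expressed both ways:
fixed = fixed_function_stage("transform", [1, 2, 3])
custom = programmable_stage(lambda x: x * 2, [1, 2, 3])
```

The second form is strictly more general: any of the old fixed functions can be expressed as a kernel, which is why the fixed-function blocks were gradually replaced by small programmable cores.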
Jan 21, 2016
The problem for the Open Source community is that up until embarrassingly late, the graphics stack and the ways of interacting with GPU hardware in Linux and the like, through X.Org, were treating the hardware as if it were still the old "accelerator"-type stupid state machine.

There were no provisions for e.g. memory management on the GPU hardware, because the hardware was just assumed to implement a hardware version of some elementary OpenGL function call and that's that. Much of the functionality has since been "patched in", but the fully open-source provisions for writing a modern GPU driver are still very flimsy, and there are many competing projects to remedy that.

Hence, when companies like nVidia write graphics drivers for Linux (really, for running on top of Linux), they make use of the modularity of the system and substitute and expand large parts of the X.Org code with their own proprietary implementation of the necessary pieces to make it work.
Jan 21, 2016
And so...

Among the issues cited for Linux not being ready for the desktop are graphics driver issues, audio problems, hardware compatibility problems, X11, a few issues with Wayland, font problems, and a variety of others. Also cited are a lack of cooperation among open-source developers, fragmentation among Linux desktops, software issues, and more.

So the real story boils down to this: OSS developers can't get their heads out of their asses for long enough to provide a proper interface for hardware manufacturers to write drivers against, so the drivers they end up getting are lacking and ill-supported.

Instead of rallying together for fixing the situation, the Open Source community is instead asking for the HW manufacturers to give them direct access to the hardware/firmware to bypass the whole mess of their own making. HW manufacturers don't care enough.
Spaced out Engineer
Jan 21, 2016
We need this with deep learning for CodePhage and Helium. If we can integrate multiple heuristic approaches for refining architecture, we can create application-specific computing devices or memory morphologies beyond mere cache. Splay lists are pushing the bounds for generalized problem spaces in software.
