Indiana University student offers Harlan programming language for GPUs

Jul 04, 2013 by Nancy Owano

(Phys.org) —A doctoral candidate in computer science has come up with a programming language, Harlan, that can leverage the computing power of a GPU, and his contribution may turn a corner in working with GPU applications. Just released, Harlan is all new and entirely dedicated to building applications that run on GPUs. Its creator is Eric Holk of Indiana University. As a doctoral candidate, his interests, he said, focus on designing and implementing programming languages that ease the production of reliable software that performs well. Easier said than done, some may argue, when it comes to GPUs.

Tech sites reviewing his achievement say he has taken on quite a challenge. Programming for GPUs, as ExtremeTech put it, calls for a type of programmer who is willing to spend "a lot of brain cycles dealing with low-level details which distract from the main purpose of the code." Holk stuck to it, attempting to answer his own question: What if a language could be built up from scratch, designed from the start to support GPU programming? Harlan is special in that it can take care of the "grunt work" of GPU programming.

A few key points about Harlan: (1) It compiles to OpenCL and can make use of the higher-level languages Python and Ruby. (2) Its syntax is based on Scheme, a dialect of Lisp. Scheme itself deserves a brief aside here: Harlan's build depends on Petite Chez Scheme, which is available for download. Chez Scheme is an implementation of Scheme built around an incremental optimizing compiler that produces code quickly; Petite Chez Scheme is a compatible Scheme system that uses a fast interpreter in place of the compiler. It was conceived as a runtime environment for compiled Chez Scheme applications, but it can also be used as a standalone Scheme system. (3) Harlan runs on Mac OS X 10.6 (Snow Leopard), 10.7 (Lion), and 10.8 (Mountain Lion), as well as "various flavors" of Linux. The GitHub page for Harlan describes it as "a declarative, domain specific language for programming GPUs." According to the site, OpenCL implementations that should work include the Intel OpenCL SDK, the NVIDIA CUDA Toolkit, and the AMD Accelerated Parallel Processing (APP) SDK.
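
To give a flavor of the language, here is a minimal sketch of a Harlan program in its Scheme-like syntax, modeled loosely on the dot-product example published with the project. This is an illustration only; the exact forms used here (for instance, how printing and the return value are written) are assumptions and may differ from the released compiler. The kernel expression evaluates its body element-wise over the input vectors on the GPU, and reduce folds the partial products into a single number.

    ;; Illustrative sketch only -- syntax approximated from Harlan's published
    ;; examples; exact forms may differ in the actual compiler.
    (module
      (define (main)
        (let* ((X (vector 1 2 3 4))
               (Y (vector 2 3 4 5))
               ;; kernel computes (* x y) for each pair of elements on the GPU
               (dot (reduce + (kernel ((x X) (y Y)) (* x y)))))
          (print dot))
        (return 0)))

The appeal of this style is that the programmer writes only the computation; the buffer allocation, data transfer and kernel-launch details that make up the "grunt work" mentioned above are left to the compiler.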

Holk announced on his blog that Harlan is now available to the public, the result of about two years of work. "Harlan," he stated, "aims to push the expressiveness of languages available for the GPU further than has been done before." He made note of its native support for rich data structures, including trees and ragged arrays.
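
Ragged arrays are nested vectors whose inner vectors have different lengths, something that is awkward to express in the flat buffers GPUs normally work with. The hypothetical sketch below, again written in an approximation of Harlan's syntax rather than copied from the project, shows the idea: each row of a ragged array is summed in parallel inside a kernel.

    ;; Hypothetical sketch: a ragged array (inner vectors of unequal length),
    ;; with each row reduced to its sum on the GPU. The nesting of reduce
    ;; inside kernel is assumed here for illustration.
    (module
      (define (main)
        (let ((rows (vector (vector 1 2 3)
                            (vector 4 5)
                            (vector 6 7 8 9))))
          (print (kernel ((row rows)) (reduce + row))))
        (return 0)))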

More information: github.com/eholk/harlan
blog.theincredibleholk.org/blo… e-release-of-harlan/

User comments: 10

antialias_physorg
4 / 5 (4) Jul 04, 2013
I think I'll give this a whirl. Been itching to see what a GPU can do vs. a multi-core CPU in some areas but until now just been put off by the clunkiness of doing GPU operations.
VendicarE
2.3 / 5 (3) Jul 04, 2013
GPUs have lots of computational elements, but few instruction streams.

That is their downfall.
VendicarE
5 / 5 (2) Jul 04, 2013
Most Intel CPUs sold these days have 2 cores and 4 instruction streams, or 8 cores and 16 instruction streams.
VendicarE
3.7 / 5 (3) Jul 04, 2013
"The hundreds of GPU cores correspond 1000% or higher increase of computational speed" - natello

Generally incorrect. Those cores are linked and cannot be independently programmed. Typically they are limited to 5 cores and 4 instruction streams.

So for general purpose computation the GPU provides little or no advantage over the CPU.

It is only where the application processes vectors that GPUs provide substantially better performance than a modern CPU.

In terms of supercomputing, the typical performance is 18 percent of the peak performance.
gmurphy
not rated yet Jul 05, 2013
The real issue with GPUs is not the number of cores or the frequency of the processing clock but the memory bottleneck. Much of my optimisation work consists of restructuring code to reduce memory overhead as much as possible. Depending on the task, I can achieve 100x speedups from the host CPU (3.6GHz AMD vs Nvidia Titan).
gwrede
not rated yet Jul 07, 2013
"So for general purpose computation the GPU provides little or no advantage over the CPU." - VendicarE
I don't think they are trying to usurp the regular CPU, nor normal programming languages. This would only be used in the parts of a program where the GPU can be faster.
VendicarE
not rated yet Jul 07, 2013
"Depending on the task, I can achieve 100x speedups from the host CPU (3.6Ghz AMD vs Nvidia Titan)." - Gmurphy

Yes. That is typically the kind of optimization result that is possible. Sometimes several times higher depending on the problem and the CPU.

Cache misses are very expensive, and can stall the CPU for 50 to 100 cycles as it waits for RAM to respond.

Still... With only a small number of instruction streams, GPUs are as limited as CPUs when it comes to general data processing.

VendicarE
not rated yet Jul 07, 2013


gwrede
not rated yet Jul 08, 2013
"Still... With only a small number of instruction streams, GPUs are as limited as CPUs when it comes to general data processing." - VendicarE
That's why you have both in a computer.
VendicarE
not rated yet Jul 08, 2013
It has only been recently that GPUs have had the ability to conditionally branch.

When you are operating very many computational units at the same time, each processing the same opcode, the conditions for branching are not well defined.