Indiana University student offers Harlan programming language for GPUs

July 4, 2013 by Nancy Owano weblog

A doctoral candidate in computer science has come up with a programming language, Harlan, that can leverage the computing power of a GPU, a contribution that may mark a turning point for GPU applications. The language is brand new and dedicated entirely to building applications that run on GPUs. Its creator is Eric Holk of Indiana University, whose research interests, he said, focus on designing and implementing programming languages that ease the production of reliable, high-performing software. Easier professed than done, some may argue, when it comes to GPUs.

Tech sites reviewing his achievement say he has taken on quite a challenge. Programming for GPUs, as ExtremeTech put it, calls for a type of programmer who is willing to spend "a lot of brain cycles dealing with low-level details which distract from the main purpose of the code." Holk stuck to it, attempting to answer his own question: What if a language could be built up from scratch, designed from the start to support GPU programming? Harlan is special in that it can take care of the "grunt work" of GPU programming.

A few key points about Harlan: (1) It compiles to OpenCL and can work alongside higher-level languages such as Python and Ruby. (2) Its syntax is based on Scheme, a dialect of Lisp. Harlan's toolchain brings Scheme along with it: Petite Chez Scheme is available for download. Chez Scheme is an implementation of Scheme built around an incremental optimizing compiler that produces code quickly; Petite Chez Scheme is a compatible Scheme system that substitutes a fast interpreter for the compiler. It was conceived as a runtime environment for compiled Chez Scheme applications, but can also be used as a standalone Scheme system. (3) Harlan runs on Mac OS X 10.6 (Snow Leopard), 10.7 (Lion), and 10.8 (Mountain Lion), as well as "various flavors" of Linux. The GitHub page describes Harlan as "a declarative, domain specific language for programming GPUs." According to the site, OpenCL implementations that should work include the Intel OpenCL SDK, the NVIDIA CUDA Toolkit, and the AMD Accelerated Parallel Processing (APP) SDK.
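The article doesn't show any Harlan code, but the core idea it describes, writing one per-element computation and letting the compiler map it across GPU hardware, can be sketched in plain Python with NumPy. This is illustrative only: the function name `saxpy` and the NumPy vectorization stand in for the kind of data-parallel kernel a language like Harlan would compile to OpenCL.

```python
# Illustrative only: this is NumPy, not Harlan syntax. It sketches the
# data-parallel "kernel" idea -- one expression applied to every element --
# that a GPU language like Harlan compiles down to OpenCL.
import numpy as np

def saxpy(a, x, y):
    # One logical kernel: out[i] = a * x[i] + y[i] for every index i.
    # On a GPU each element could be handled by a separate work-item;
    # here NumPy simply vectorizes the whole thing on the CPU.
    return a * x + y

x = np.arange(4, dtype=np.float32)   # [0, 1, 2, 3]
y = np.ones(4, dtype=np.float32)    # [1, 1, 1, 1]
result = saxpy(2.0, x, y)
print(result)                        # [1. 3. 5. 7.]
```

The point of the abstraction is that the programmer never writes the loop, the memory transfers, or the work-item indexing, which is exactly the "grunt work" the article says Harlan takes care of.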

Holk announced on his blog that Harlan is now available to the public, the result of about two years of work. "Harlan," he stated, "aims to push the expressiveness of languages available for the GPU further than has been done before." He made note of its native support for rich data structures, including trees and ragged arrays.


More information: … e-release-of-harlan/

Related Stories

ARM asks Khronos for OpenCL nod for Midgard GPU

August 5, 2012

ARM wastes no time taking every opportunity to prove its reputation as "GPU computing" kingpins. GPU computing is seen as having a bright future, where the computational performance of the GPU, which was historically ...

NVIDIA dresses up CUDA parallel computing platform

January 28, 2012

This week’s NVIDIA announcement of a dressed up version of its CUDA parallel computing platform is targeted as a good news message for engineers, biologists, chemists, physicists, geophysicists, and ...

Google trumpets Dart release as first stable version

October 17, 2012

Google on Tuesday released its first stable version of Dart SDK. Dart is a programming language for Web applications that Google thinks will offer an improved, easy to learn, high performance environment for ...




4 / 5 (4) Jul 04, 2013
I think I'll give this a whirl. Been itching to see what GPU can do vs. multi-core CPU in some areas but until now just been put off by the clunkiness of doing GPU operations.
2.3 / 5 (3) Jul 04, 2013
GPUs have lots of computational elements, but few instruction streams.

That is their downfall.
5 / 5 (2) Jul 04, 2013
Most Intel CPUs sold these days have 2 cores and 4 instruction streams, or 8 cores and 16 instruction streams.
3.7 / 5 (3) Jul 04, 2013
"The hundreds of GPU cores correspond 1000% or higher increase of computational speed" - natello

Generally incorrect. Those cores are linked and can not be independently programmed. Typically they are limited to 5 cores and 4 instruction streams.

So for general purpose computation the GPU provides little or no advantage over the CPU.

It is only where the application processes vectors that GPUs provide substantially better performance over a modern CPU.

In terms of supercomputing, the typical performance is 18 percent of the peak performance.
not rated yet Jul 05, 2013
Real issue with GPUs is not the number of cores or frequency of the processing clock but the memory bottleneck. Much of my optimisation work consists of restructuring code to reduce memory overhead as much as possible. Depending on the task, I can achieve 100x speedups from the host CPU (3.6 GHz AMD vs Nvidia Titan).
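The commenter's point, that memory traffic rather than core count is often the limit, can be made concrete with a back-of-the-envelope arithmetic-intensity calculation (FLOPs per byte of memory moved). The sizes and byte counts below are illustrative assumptions, not measurements of any particular hardware:

```python
# Back-of-the-envelope arithmetic intensity: FLOPs per byte of memory moved.
# Low intensity means the kernel is memory-bound; bandwidth, not core count,
# sets its speed. The problem sizes here are illustrative assumptions.
def arithmetic_intensity(flops, bytes_moved):
    return flops / bytes_moved

n = 1_000_000
# Vector add c = a + b: n additions, 3*n float32 values (4 bytes each) moved.
vec_add = arithmetic_intensity(n, 3 * n * 4)

# Dense m x m matrix multiply: 2*m**3 FLOPs, roughly 3*m**2 float32s moved.
m = 1000
matmul = arithmetic_intensity(2 * m**3, 3 * m**2 * 4)

print(round(vec_add, 3))   # 0.083 -> memory-bound: restructuring for memory wins
print(round(matmul, 1))    # 166.7 -> compute-bound: the many GPU cores stay busy
```

This is why the restructuring the commenter describes pays off: raising the useful work done per byte fetched is usually worth more than adding cores.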
not rated yet Jul 07, 2013
So for general purpose computation the GPU provides little or no advantage over the CPU.
I don't think they are trying to usurp the regular CPU, nor normal programming languages. This would only be used in the parts of a program where the GPU can be faster.
not rated yet Jul 07, 2013
"Depending on the task, I can achieve 100x speedups from the host CPU (3.6Ghz AMD vs Nvidia Titan)." - Gmurphy

Yes. That is typically the kind of optimization result that is possible. Sometimes several times higher depending on the problem and the CPU.

Cache misses are very expensive, and can stall the CPU for 50 to 100 cycles as it waits for RAM to respond.

Still... With only a small number of instruction streams, GPUs are as limited as CPUs when it comes to general data processing.

not rated yet Jul 08, 2013
Still... With only a small number of instruction streams, GPUs are as limited as CPUs when it comes to general data processing.
That's why you have both in a computer.
not rated yet Jul 08, 2013
It has only been recently that GPUs have had the ability to conditionally branch.

When you are operating very many computational units at the same time, each processing the same opcode, the conditions for branching are not well defined.
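The branching problem the commenter describes is commonly worked around with predication: every lane computes both sides of the conditional, and a per-element select keeps the right result, so no lane ever diverges. A minimal sketch of that pattern in NumPy (the input values are arbitrary examples):

```python
# Sketch of predication: instead of branching per element (which would make
# SIMD lanes diverge), compute BOTH sides for every element, then select.
import numpy as np

x = np.array([-2.0, -1.0, 1.0, 2.0])

# Scalar-style logic we want, per element:
#   y = x*x if x > 0 else -x
then_side = x * x          # every lane computes the "then" result
else_side = -x             # every lane also computes the "else" result
y = np.where(x > 0, then_side, else_side)  # per-lane select, no branch
print(y)                   # [2. 1. 1. 4.]
```

The cost is doing both computations everywhere, which is why heavily branchy code tends to waste a GPU's parallelism even when branching is supported.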
