Adapteva $99 parallel processing boards targeted for summer

Apr 22, 2013 by Nancy Owano

(Phys.org) —The semiconductor technology company Adapteva showed off its parallel-processing board for Linux supercomputing at a major Linux event earlier this month, and the board is targeted to ship this summer. The boards will go out to those who pledged money in last year's Adapteva Kickstarter campaign and to other customers. Not a minute too soon. To hear the story of computing as Adapteva tells it, the future of computing is parallel. Big data and other demands pose a processor challenge, and Adapteva sees an energy-efficiency problem that calls for action. The company is on a mission to "democratize" access to parallel computing.

The processor board running on Linux is called Parallella. According to the campaign page, pledges totaled $898,921 from 4,965 backers, surpassing Adapteva's funding goal. The company went the crowdfunding route in order to produce the Parallella boards in volume: it sought funding for the tooling needed for volume production, to make the board effort viable, and to get the platform "out there."

The company's drive to make access easier for more people carries a sense of urgency because it wants to speed adoption of parallel processing across the industry. Founded in 2008, the company has gained traction with government labs, corporate labs, and schools, but getting the wider industry to buy into parallel computing has been challenging. The team became convinced that the only way to create a sustainable parallel-computing ecosystem was through a grassroots movement. Company founder Andreas Olofsson said that parallel computing is the only way to keep scaling performance and cost. Systems, he stated, need to be parallel and they need to be open. "Our 99 dollar kit is going to be completely open," he said, and the Parallella will educate the masses on how to do parallel computing.


"We don't have time to wait for the rest of the industry to come around to the fact that parallel computing is the only path forward and that we need to act now. We hope you will join us in our mission to change the way computers are built," they had said when appealing earlier for support.

The Lexington, Massachusetts, company has now announced that it has built the first Parallella board for Linux supercomputing. It made the announcement at the Linux Collaboration Summit in San Francisco earlier this month. (The summit is a gathering of core kernel developers, distribution maintainers, ISVs, end users, system vendors and various other community organizations.) The Linux distribution being used is Ubuntu 12.04.

Adapteva's board is the size of a credit card. It comes with a dual-core ARM A9 processor and a 64-core Epiphany Multicore Accelerator chip. Parallella's other details include 1GB of RAM, two USB 2.0 ports, a microSD slot, and an HDMI connection. Active components and most of the standard connectors sit on the top side of the board; the expansion connectors and the microSD card connector are on the bottom side.

Olofsson said the company's first audience target is developers. "We need to make sure that every programmer has access to cheap and open parallel hardware and development tools," said an Adapteva program note for the Linux event. Massively parallel computing will become truly ubiquitous once the vast majority of programmers and programs know how to take full advantage of the underlying hardware. The company sees a critical need to close the knowledge gap in parallel programming. Its second target tier, they said, is people who just want an awesome computer for $99.
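
To give a flavor of the data-parallel style the Parallella is meant to teach, here is a minimal sketch in plain C using POSIX threads. It is not the Epiphany SDK; the worker count, array size, and function names are illustrative assumptions, but the split-the-work-across-cores pattern is the one the board's 64 accelerator cores are built to exploit.

```c
/* Hypothetical sketch: summing an array by splitting it across N worker
 * threads, the same divide-the-data pattern used on many-core chips.
 * Plain POSIX threads, not Adapteva's Epiphany SDK. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NWORKERS 8          /* illustrative; the Epiphany chip has 64 cores */
#define N 1000000

typedef struct {
    const double *data;     /* shared, read-only input */
    size_t start, end;      /* this worker's slice of the array */
    double partial;         /* per-worker result, no locking needed */
} job_t;

static void *sum_slice(void *arg)
{
    job_t *job = arg;
    double s = 0.0;
    for (size_t i = job->start; i < job->end; i++)
        s += job->data[i];
    job->partial = s;
    return NULL;
}

int main(void)
{
    double *data = malloc(N * sizeof *data);
    for (size_t i = 0; i < N; i++)
        data[i] = 1.0;      /* dummy input */

    pthread_t threads[NWORKERS];
    job_t jobs[NWORKERS];
    size_t chunk = N / NWORKERS;

    for (int w = 0; w < NWORKERS; w++) {
        jobs[w].data  = data;
        jobs[w].start = (size_t)w * chunk;
        jobs[w].end   = (w == NWORKERS - 1) ? N : (size_t)(w + 1) * chunk;
        pthread_create(&threads[w], NULL, sum_slice, &jobs[w]);
    }

    double total = 0.0;
    for (int w = 0; w < NWORKERS; w++) {
        pthread_join(threads[w], NULL);
        total += jobs[w].partial;   /* combine the partial sums */
    }

    printf("sum = %f\n", total);
    free(data);
    return 0;
}
```

Compiled with something like gcc -O2 -pthread, the same decomposition scales by simply raising the worker count on hardware with more cores.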

Platform reference design and drivers are now available.


More information: www.parallella.org/2013/04/02/… a-hardware-platform/
www.parallella.org/2013/04/16/… -name-is-parallella/


User comments: 9

grondilu
2 / 5 (1) Apr 22, 2013
We need more of these kinds of projects. We need moar computing power, and if it's on open hardware, it's even better.
Requiem
1 / 5 (6) Apr 23, 2013
We don't need more computing power, we need less abstraction. Every time processors get faster, people simply soak it all up by doing less development work. It's now at the point where your typical "bonafide developer", supposedly a working professional, can only do the equivalent of a child playing with legos. Ask him to make a shape that he doesn't have legos for, and he sits there looking at you with a blank stare.

Idiocracy here we come. Pretty soon all of the lego-builders will leave for Mars, or be slaughtered by hordes of pitchfork-wielding citizens wondering where their supply of Brawndo has gone, or something.
Requiem
1 / 5 (5) Apr 23, 2013
I should amend that last statement to say "We don't need more computing power nearly as much as we need less abstraction."

In my line of work as an ultra-scale consultant and infrastructure provider, I routinely make existing servers go thousands of times faster without doing anything to the hardware, and it's worth noting that the problems are NEVER solved by throwing more hardware at them.
VendicarE
5 / 5 (1) Apr 23, 2013
The 45 GHz figure comes from adding up the clock speeds of all 64 accelerator cores.

So the individual clock speed of each of those cores is roughly 700 MHz (64 × ~0.7 GHz ≈ 45 GHz).

TI has been rating its multi-core DSPs like this for years.
VendicarE
5 / 5 (1) Apr 23, 2013
I agree with Requiem. Modern software efficiency is spectacularly poor. Even the best compilers only manage to produce code that runs within a factor of 4 of properly optimized code.

And that is before that code is assembled into complex component assemblies that are even more poorly organized.

I have seen programmers implement delay loops and then have their compiler inline those loops to reduce their execution time.

In one instance I reduced a program's executable size by a factor of 10 simply by removing redundant code. Worked the same as before. It was just 10 times smaller and a dozen times faster.
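
A quick sketch of the delay-loop point above, assuming a typical optimizing C compiler: an empty busy-wait loop has no observable effect, so the optimizer is free to delete it outright, while marking the counter volatile keeps it alive. The names and iteration count are made up for illustration.

```c
/* Illustrative sketch: a naive busy-wait delay loop versus one the
 * optimizer cannot remove. At -O2, most C compilers delete the first
 * loop entirely because it has no observable effect. */
#include <stdio.h>

static void naive_delay(void)
{
    /* Dead code to an optimizer: nothing here is ever observed. */
    for (long i = 0; i < 100000000L; i++)
        ;
}

static void volatile_delay(void)
{
    /* 'volatile' forces the compiler to perform every iteration, so the
     * loop survives optimization (still a poor way to delay compared
     * with nanosleep() or a hardware timer). */
    for (volatile long i = 0; i < 100000000L; i++)
        ;
}

int main(void)
{
    naive_delay();     /* likely compiled away at -O2 */
    volatile_delay();  /* actually burns cycles */
    puts("done");
    return 0;
}
```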
Requiem
1 / 5 (4) Apr 24, 2013
In highly concurrent systems the result of these "complex component assemblies" manifests itself as crippling bottlenecks at various layers of the architecture. What's especially irritating is that the people who develop the software have absolutely no concept of what's even going wrong, much less how they would go about solving it.

The upside, at least for me, is that a mere order of magnitude is a disappointing performance improvement by the time I'm done eliminating bottlenecks, because bottlenecks are always far more destructive than a simple problem of inefficient code, which at least scales with workers and probably only uses CPU. Most people who deal with high concurrency would be dancing in the streets if CPU were their bottleneck.

It's an interesting contrast with what you mention about optimizing code. That's obviously very important for things like massively scaled scientific models and whatnot. But in my line of work, code efficiency takes a back seat.
Requiem
1 / 5 (4) Apr 24, 2013
Things like database structure, staged data tables that are updated on an event-driven basis, indexes and matching queries, various caching layers like sphinx/memcached/xcache, doing EVERYTHING that modifies a database in an event-driven way generally, just trying to avoid bottlenecking the components that don't easily scale in general. And just doing and storing things in an informed way.

It's super easy to scale php/python/perl/ruby/tomcat/etc workers. I often find myself reducing the complexity of queries and shifting the array sorting and whatnot into PHP or whatever with great results, for example.
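
As a rough sketch of the "shift work out of the database" idea, here is the same move in C rather than PHP: rows are fetched without an ORDER BY (hard-coded here to stand in for a real result set) and sorted in the application worker, which scales horizontally, instead of on the shared database server. The row fields and data are illustrative assumptions.

```c
/* Illustrative only: rows "fetched" from a database without an ORDER BY,
 * then sorted in the application worker with qsort(). The row data is
 * hard-coded to stand in for a real query result. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int  id;
    char name[32];
    int  score;
} row_t;

/* Sort descending by score, the ordering an ORDER BY clause would
 * otherwise force the database server to compute on every request. */
static int by_score_desc(const void *a, const void *b)
{
    const row_t *ra = a, *rb = b;
    if (ra->score < rb->score) return  1;
    if (ra->score > rb->score) return -1;
    return 0;
}

int main(void)
{
    row_t rows[] = {          /* stand-in for an unsorted result set */
        { 3, "carol", 72 },
        { 1, "alice", 91 },
        { 2, "bob",   64 },
    };
    size_t n = sizeof rows / sizeof rows[0];

    qsort(rows, n, sizeof rows[0], by_score_desc);

    for (size_t i = 0; i < n; i++)
        printf("%d %s %d\n", rows[i].id, rows[i].name, rows[i].score);
    return 0;
}
```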
antialias_physorg
1 / 5 (2) Apr 24, 2013
"We don't need more computing power nearly as much as we need less abstraction."

It's always a tradeoff.

Today you don't build purpose-built software that will run on one hardware platform and has a closed set of requirements. You build with an eye towards extending the software in the future: adding functionality and cross-platform capabilities. Programs are also so large nowadays that you program in teams, which means that you have to find common ground on style and capabilities, as multiple people need to be able to service multiple parts of the system.

Yes: purpose built one-shot systems can be more efficient. But whenever you want a new one you have to basically rebuild from scratch - and that is expensive. Not only in programming time, but in terms of certification, testing and (possibly) risk assessment.

That said: an eye for efficiency is never bad. But coding solely for efficiency is usually the first step to unsupportable code.
Requiem
1 / 5 (4) Apr 24, 2013
Once you have 50,000+ people using your stuff at once, you really have no choice but to abandon abstraction and ensure the transfer of institutional knowledge to minimize growing pains, at least in certain aspects of the architecture. As I mentioned, there are certain aspects of the architecture you can shift work into that scale as easily as throwing in more servers, mounting up your gfs/zfs/nfs/etc share and adding them to the load balancer pool. In those cases abstraction is fine, as long as it isn't dictating storage principles.

This is assuming of course that your application is even moderately driven by dynamic data.