New supercomputer 'sees' well enough to drive a car someday (w/ Video)

September 15, 2010, Yale University

NeuFlow is a supercomputer that mimics human vision to analyze complex environments, such as this street scene. (Image: Eugenio Culurciello/e-Lab)
Navigating our way down the street is something most of us take for granted; we seem to recognize cars, other people, trees and lampposts instantaneously and without much thought. In fact, visually interpreting our environment as quickly as we do is an astonishing feat requiring an enormous number of computations, which is just one reason that coming up with a computer-driven system that can mimic the human brain in visually recognizing objects has proven so difficult.

Now Eugenio Culurciello of Yale’s School of Engineering & Applied Science has developed a supercomputer based on the human visual system that operates much more quickly and efficiently than ever before. Dubbed NeuFlow, the system takes its inspiration from the mammalian visual system, mimicking its neural network to quickly interpret the world around it. Culurciello presented the results Sept. 15 at the High Performance Embedded Computing (HPEC) workshop in Boston, Mass.

The system uses complex vision algorithms developed by Yann LeCun at New York University to run large neural networks for synthetic vision applications. One application, the one Culurciello and LeCun are focusing on, is a system that would allow cars to drive themselves. To recognize the various objects encountered on the road, such as other cars, people, stoplights and sidewalks, not to mention the road itself, NeuFlow processes tens of megapixel images in real time.
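LeCun's approach is built on convolutional neural networks, whose core workload is sliding small filters across an image. As a purely illustrative sketch (this is not NeuFlow's actual code), the basic operation such hardware accelerates looks like this:

```python
# Toy 2D convolution -- the core operation in LeCun-style convolutional
# networks that vision hardware like NeuFlow is designed to accelerate.
# Illustration only; not taken from the NeuFlow project.

def conv2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation) of two
    lists-of-lists of numbers."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge filter applied to a tiny "image" whose right half is
# bright: the filter responds strongly at the boundary.
image = [[0, 0, 1, 1]] * 3
edge_kernel = [[-1, 1]] * 3  # 3x2 vertical-edge detector
print(conv2d(image, edge_kernel))  # -> [[0.0, 3.0, 0.0]]
```

A real network stacks many such filters in layers; the appeal of dedicated hardware is that these independent multiply-accumulate operations can all run in parallel.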

The system is also extremely efficient, running more than 100 billion operations per second while drawing only a few watts (less than the power a cell phone uses) to accomplish what bench-top computers with multiple graphics processors need more than 300 watts to achieve.
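Taking the article's figures at face value, the efficiency gap works out to roughly a 60-fold advantage. A quick back-of-the-envelope check (the 5 W figure for "a few watts" is an assumption, not a number from the article):

```python
# Back-of-the-envelope efficiency comparison using the article's figures.
# "A few watts" is taken as 5 W here -- an assumed value.
ops_per_second = 100e9      # >100 billion operations per second
neuflow_watts = 5.0         # "a few watts" (assumption)
gpu_rig_watts = 300.0       # bench-top multi-GPU system, per the article

neuflow_gops_per_watt = ops_per_second / neuflow_watts / 1e9
gpu_gops_per_watt = ops_per_second / gpu_rig_watts / 1e9  # same workload assumed

print(f"NeuFlow: {neuflow_gops_per_watt:.0f} GOPS/W")        # -> 20 GOPS/W
print(f"GPU rig: {gpu_gops_per_watt:.2f} GOPS/W")            # -> 0.33 GOPS/W
print(f"Ratio:   {neuflow_gops_per_watt / gpu_gops_per_watt:.0f}x")  # -> 60x
```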

“One of our first prototypes of this system is already capable of outperforming graphic processors on vision tasks,” Culurciello said.

Culurciello embedded the system on a single chip, making it much smaller, yet more powerful and efficient, than full-scale computers. “The complete system is going to be no bigger than a wallet, so it could easily be embedded in cars and other places,” Culurciello said.

Beyond autonomous car navigation, the system could be used to improve robot navigation in dangerous or difficult-to-reach locations, to provide 360-degree synthetic vision for soldiers in combat situations, or in assisted-living settings, where it could monitor motion and call for help should an elderly person fall, for example.

More information: … svision/svision.html

Related Stories

Roadrunner supercomputer puts research at a new scale

June 12, 2008

Less than a week after Los Alamos National Laboratory's Roadrunner supercomputer began operating at world-record petaflop/s data-processing speeds, Los Alamos researchers are already using the computer to mimic extremely ...

An artificial eye on your driving

April 20, 2010

With just a half second's notice, a driver can swerve to avoid a fatal accident or slam on the brakes to miss hitting a child running after a ball. But first, the driver must perceive the danger.

Learning about brains from computers, and vice versa

February 15, 2008

For many years, Tomaso Poggio’s lab at MIT ran two parallel lines of research. Some projects were aimed at understanding how the brain works, using complex computational models. Others were aimed at improving the abilities ...

Non-Blinding Headlights

February 25, 2005

Russian scientists from Dimitrovgrad (Ul'yanovsk area) have designed a new non-blinding headlight system. Its use in cars will significantly decrease the risk of driving at night, because the oncoming light will be duller, ...



5 / 5 (2) Sep 15, 2010
Awesome, by the look of progress in these areas a general AI might be possible in 10-20 years..

Just need to bring all these specialized AI's together.
1 / 5 (1) Sep 16, 2010

Just need to bring all these specialized AI's together.

And that i think is where the problem lies and why i think 20 years is too soon.
3 / 5 (2) Sep 16, 2010
I think 20 years is somewhat reasonable. If we can move from monochrome display cell phones with no internet capabilities to iPhone in about 10 years, we just might create AI. I hope to see it in my lifetime..
not rated yet Sep 16, 2010
FPGA chips are quite neat stuff in this sort of computation. They're a compromise between really fast and expensive DSP chips that only do one thing, and programmable and cheap but inefficient CPUs.

What this new "supercomputer" is, is basically a programmable gate array chip that simulates a bunch of other circuits, which unlike in a computer simulation, are actually physically parallel to each other so they can compute really fast despite being somewhat slow compared to actual CPUs.

Programmable means that you can change the physical configuration and connections of the logic gates inside it, so you can turn the same chip into pretty much anything that fits within its limitations.

Then, once you've figured out what works, you can take the circuit that it simulates and turn it into a mass-manufactured DSP chip that uses less power and does the same thing faster. Take the scaffolding out to let the building stand on its own, so to speak.
not rated yet Sep 16, 2010
But I can assure you a cellphone uses less than "a few watts".

My cellphone's battery is about 1000 mAh, and its nominal voltage is 3.7 volts, meaning it has 3.7 watt-hours of energy in it.

If my cellphone used just 1 watt of power, I would have only 3.7 hours of operating time. Quite the contrary: on standby my phone can last for two weeks, meaning it draws only about 11 milliwatts, and even while I'm talking on it, it stays on for 5 hours, which means it draws less than a watt of power. Using the cell radio is the most power-intensive task you do on a cellphone, even on a smartphone.
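The battery arithmetic above can be reproduced in a few lines (capacity and run-time figures are the commenter's):

```python
# Rough phone power-draw estimates from battery capacity, using the
# commenter's figures: 1000 mAh at a nominal 3.7 V.
battery_wh = 1.0 * 3.7              # 1 Ah * 3.7 V = 3.7 Wh

standby_hours = 14 * 24             # two weeks on standby
standby_watts = battery_wh / standby_hours

talk_hours = 5
talk_watts = battery_wh / talk_hours

print(f"Standby draw:   {standby_watts * 1000:.0f} mW")  # -> ~11 mW
print(f"Talk-time draw: {talk_watts:.2f} W")             # -> 0.74 W
```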
not rated yet Sep 16, 2010
Mr Eugenio Culurciello is a self-promoting bullshitter. All his claims about relative performance are not only unreasonable but absolute forgeries of the truth. You can easily verify this for yourself; just check the floating point performance of any modern GPU.

It's as if he woke up one morning and realized "Holy shit computers can recognize patterns? How can I take credit for this?"
not rated yet Sep 16, 2010
And that i think is where the problem lies and why i think 20 years is too soon.

15 years is about the timeframe for the first supercomputer able to brute force simulate the number of neurons in a human brain, at least with the current complexity and (in)efficiency of simulation (last time I heard, Blue Brain runs 200 simulations for each neuron to model the behavior, one neuron per processor).
