New supercomputer 'sees' well enough to drive a car someday (w/ Video)

NeuFlow is a supercomputer that mimics human vision to analyze complex environments, such as this street scene. (Image: Eugenio Culurciello/e-Lab)

Navigating our way down the street is something most of us take for granted; we seem to recognize cars, other people, trees and lampposts instantaneously and without much thought. In fact, visually interpreting our environment as quickly as we do is an astonishing feat requiring an enormous number of computations, which is just one reason that coming up with a computer-driven system that can mimic the human brain in visually recognizing objects has proven so difficult.

Now Eugenio Culurciello of Yale’s School of Engineering & Applied Science has developed a supercomputer based on the human visual system that operates much more quickly and efficiently than ever before. Dubbed NeuFlow, the system takes its inspiration from the mammalian visual system, mimicking its neural network to quickly interpret the world around it. Culurciello presented the results Sept. 15 at the High Performance Embedded Computing (HPEC) workshop in Boston, Mass.

The system uses complex vision algorithms developed by Yann LeCun at New York University to run large neural networks for synthetic vision applications. One idea, the one Culurciello and LeCun are focusing on, is a system that would allow cars to drive themselves. In order to recognize the various objects encountered on the road, such as other cars, people, stoplights and sidewalks, not to mention the road itself, NeuFlow processes tens of megapixel images in real time.
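The article does not detail the algorithms, but LeCun's synthetic-vision work centers on convolutional neural networks, whose core operation is 2D convolution over the image. A minimal sketch of that operation (the function and the tiny test image are illustrative only, not NeuFlow's actual code):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D convolution: slide the kernel over the image and take
    a weighted sum at each position. This is the operation NeuFlow-style
    hardware accelerates massively in parallel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 Laplacian (edge-detecting) kernel on a small synthetic ramp image:
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
result = conv2d(image, kernel)
print(result.shape)  # (3, 3)
# The ramp has no edges, so the Laplacian response is all zeros.
print(result)
```

A real network stacks many such convolutions with learned kernels; the point here is only to show the basic operation that dominates the workload.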

The system is also extremely efficient, simultaneously running more than 100 billion operations per second using only a few watts (that’s less than the power a cell phone uses) to accomplish what it takes bench-top computers with multiple graphic processors more than 300 watts to achieve.
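The efficiency claim can be sanity-checked with back-of-the-envelope arithmetic. The article gives "more than 100 billion operations per second" at "a few watts" versus bench-top systems at over 300 watts; the 5 W figure below is an assumed stand-in for "a few watts":

```python
# Rough ops-per-joule comparison using the article's figures.
neuflow_ops_per_s = 100e9   # "more than 100 billion operations per second"
neuflow_watts = 5.0         # assumption standing in for "a few watts"
gpu_watts = 300.0           # "more than 300 watts" for multi-GPU rigs

neuflow_ops_per_joule = neuflow_ops_per_s / neuflow_watts
gpu_ops_per_joule = neuflow_ops_per_s / gpu_watts  # same workload, GPU power

print(f"NeuFlow: {neuflow_ops_per_joule:.1e} ops/J")
print(f"GPU rig: {gpu_ops_per_joule:.1e} ops/J")
print(f"efficiency ratio: {gpu_watts / neuflow_watts:.0f}x")  # 60x
```

Even under these generous assumptions for the GPU (matching throughput exactly), the power gap alone implies a roughly 60-fold efficiency advantage.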

“One of our first prototypes of this system is already capable of outperforming graphic processors on vision tasks,” Culurciello said.

Culurciello embedded the system on a single chip, making it much smaller, yet more powerful and efficient, than full-scale computers. “The complete system is going to be no bigger than a wallet, so it could easily be embedded in cars and other places,” Culurciello said.

Beyond autonomous car navigation, the system could be used to improve robot navigation in dangerous or difficult-to-reach locations, to provide 360-degree synthetic vision for soldiers in combat situations, or in assisted living settings, where it could monitor motion and call for help should an elderly person fall, for example.


More information: … svision/svision.html
Provided by Yale University
Citation: New supercomputer 'sees' well enough to drive a car someday (w/ Video) (2010, September 15) retrieved 18 August 2019 from
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.


User comments

Sep 15, 2010
Awesome, by the look of progress in these areas a general AI might be possible in 10-20 years..

Just need to bring all these specialized AI's together.

Sep 16, 2010

Just need to bring all these specialized AI's together.

And that i think is where the problem lies and why i think 20 years is too soon.

Sep 16, 2010
I think 20 years is somewhat reasonable. If we can move from monochrome display cell phones with no internet capabilities to iPhone in about 10 years, we just might create AI. I hope to see it in my lifetime..

Sep 16, 2010
FPGA chips are quite neat stuff for this sort of computation. They're a compromise between really fast but expensive DSP chips that only do one thing, and programmable, cheap, but inefficient CPUs.

This new "supercomputer" is basically a programmable gate array chip that simulates a bunch of other circuits which, unlike in a software simulation, are actually physically parallel to each other, so they can compute really fast despite running at clock speeds well below those of actual CPUs.

Programmable means that you can change the physical configuration and connections of the logic gates inside it, so you can turn the same chip into pretty much anything that fits within its limitations.

Then, once you've figured out what works, you can take the circuit that it simulates and turn it into a mass-manufactured DSP chip that uses less power and does the same thing faster. Take the scaffolding out to let the building stand on its own, so to speak.
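The "programmable gates" described above are usually implemented as small lookup tables (LUTs): reconfiguring the FPGA means loading new truth tables into the same physical cells. A conceptual model of a 2-input LUT (this is a software sketch of the idea, not FPGA tooling):

```python
# Model of an FPGA lookup table: an n-input LUT stores a 2^n-entry
# truth table, and "reprogramming" the chip means rewriting that table.
def make_lut2(truth_table):
    """truth_table: 4 output bits, indexed by (a << 1) | b."""
    return lambda a, b: truth_table[(a << 1) | b]

# The same physical cell can become any 2-input gate:
AND = make_lut2([0, 0, 0, 1])
XOR = make_lut2([0, 1, 1, 0])

print(AND(1, 1), XOR(1, 0))  # 1 1
```

On a real FPGA, thousands of such LUTs plus a programmable routing fabric run simultaneously, which is where the physical parallelism the comment describes comes from.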

Sep 16, 2010
But, I can assure you a cellphone uses less than "a few watts".

My cellphone's battery is about 1000 mAh, and its nominal voltage is 3.7 volts, meaning it has 3.7 Watt hours of energy in it.

If my cellphone used just 1 watt of power, I would have only 3.7 hours of operating time. Quite the contrary: on standby my phone can last for two weeks, meaning it draws only about 11 thousandths of a watt, and even while I'm talking on it, it stays on for 5 hours, which means it draws less than a watt of power. Using the cell radio is the most power-intensive task you do on a cellphone, even on a smartphone.
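The battery arithmetic above can be checked directly (using the comment's own 1000 mAh / 3.7 V figures):

```python
# Energy stored in the battery, from the figures in the comment.
capacity_mah = 1000
voltage = 3.7
energy_wh = capacity_mah / 1000 * voltage       # 3.7 Wh

# At a constant 1 W draw the battery lasts 3.7 hours.
runtime_at_1w = energy_wh / 1.0
print(runtime_at_1w)                            # 3.7

# Two weeks of standby works out to roughly 11 milliwatts average draw.
standby_hours = 14 * 24                         # 336 h
standby_watts = energy_wh / standby_hours
print(round(standby_watts * 1000, 1))           # 11.0 (mW)

# Five hours of talk time is about 0.74 W average.
talk_watts = energy_wh / 5
print(round(talk_watts, 2))                     # 0.74
```

So the standby draw is on the order of milliwatts, not microwatts, but the conclusion stands: either way it is far below the "few watts" quoted for NeuFlow.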

Sep 16, 2010
Mr Eugenio Culurciello is a self-promoting bullshitter. All his claims about relative performance are not only unreasonable but outright forgeries of the truth. You can easily verify this for yourself; just check the floating point performance of any modern GPU.

It's as if he woke up one morning and realized "Holy shit computers can recognize patterns? How can I take credit for this?"

Sep 16, 2010
And that i think is where the problem lies and why i think 20 years is too soon.

15 years is about the timeframe for the first supercomputer able to brute force simulate the number of neurons in a human brain, at least with the current complexity and (in)efficiency of simulation (last time I heard, Blue Brain runs 200 simulations for each neuron to model the behavior, one neuron per processor).
