New supercomputer 'sees' well enough to drive a car someday (w/ Video)

Sep 15, 2010
NeuFlow is a supercomputer that mimics human vision to analyze complex environments, such as this street scene. (Image: Eugenio Culurciello/e-Lab)

(PhysOrg.com) -- Navigating our way down the street is something most of us take for granted; we seem to recognize cars, other people, trees and lampposts instantaneously and without much thought. In fact, visually interpreting our environment as quickly as we do is an astonishing feat requiring an enormous number of computations, which is just one reason that coming up with a computer-driven system that can mimic the human brain in visually recognizing objects has proven so difficult.

Now Eugenio Culurciello of Yale’s School of Engineering & Applied Science has developed a supercomputer based on the human visual system that operates much more quickly and efficiently than ever before. Dubbed NeuFlow, the system takes its inspiration from the mammalian visual system, mimicking its neural network to quickly interpret the world around it. Culurciello presented the results Sept. 15 at the High Performance Embedded Computing (HPEC) workshop in Boston, Mass.

The system uses complex vision algorithms developed by Yann LeCun at New York University to run large neural networks for synthetic vision applications. One idea, the one Culurciello and LeCun are focusing on, is a system that would allow cars to drive themselves. In order to recognize the various objects encountered on the road, such as other cars, people, stoplights and sidewalks, not to mention the road itself, NeuFlow processes tens of megapixel images in real time.
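
As a rough illustration of what those vision algorithms compute: convolutional networks of the kind LeCun pioneered repeatedly filter an image with small kernels, apply a nonlinearity, and downsample the result. Below is a minimal NumPy sketch of one such stage; it is illustrative only, not NeuFlow's or LeCun's actual code, and the image and kernel are toy stand-ins.

```python
# One stage of a convolutional network: filter, rectify, downsample.
# This sketches the kind of arithmetic NeuFlow accelerates in hardware.
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel across the image to produce a feature map."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)  # keep only positive filter responses

def max_pool(x, size=2):
    """Downsample by keeping the strongest response in each size-by-size block."""
    H = x.shape[0] // size * size
    W = x.shape[1] // size * size
    return x[:H, :W].reshape(H // size, size, W // size, size).max(axis=(1, 3))

# A toy grayscale "scene" and a Sobel-style vertical-edge kernel.
image = np.random.rand(48, 64)
kernel = np.array([[1.0, 0.0, -1.0],
                   [2.0, 0.0, -2.0],
                   [1.0, 0.0, -1.0]])

feature_map = max_pool(relu(conv2d(image, kernel)))
print(feature_map.shape)  # (23, 31): a downsampled map of edge responses
```

A real network stacks many such stages, with learned kernels rather than a hand-picked edge detector; the point is that every stage is dominated by the same multiply-accumulate operations, repeated millions of times per frame.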

The system is also extremely efficient: it simultaneously runs more than 100 billion operations per second using only a few watts (less than the power a cell phone uses) to accomplish what bench-top computers with multiple graphics processors need more than 300 watts to achieve.
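
To put those figures in perspective, here is the back-of-the-envelope arithmetic. The wattages are the article's round numbers; "a few watts" is assumed to mean roughly 5 W, and both systems are taken to deliver the same throughput.

```python
# Ops-per-watt comparison from the article's figures (5 W is an assumption).
neuflow_ops = 100e9      # more than 100 billion operations per second
neuflow_watts = 5.0      # assumed value for "a few watts"
gpu_watts = 300.0        # bench-top machine with multiple graphics processors

print(f"NeuFlow : {neuflow_ops / neuflow_watts / 1e9:.0f} Gops/W")  # 20 Gops/W
print(f"GPU rig : {neuflow_ops / gpu_watts / 1e9:.2f} Gops/W")      # 0.33 Gops/W
print(f"ratio   : {gpu_watts / neuflow_watts:.0f}x")                # 60x at these numbers
```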

“One of our first prototypes of this system is already capable of outperforming graphic processors on vision tasks,” Culurciello said.

Culurciello embedded the supercomputer on a single chip, making the system much smaller, yet more powerful and efficient, than full-scale computers. “The complete system is going to be no bigger than a wallet, so it could easily be embedded in cars and other places,” Culurciello said.

Beyond autonomous car navigation, the system could be used to improve robot navigation in dangerous or difficult-to-reach locations, to provide 360-degree synthetic vision for soldiers in combat situations, or in assisted-living settings, where it could monitor motion and call for help should an elderly person fall, for example.

More information: www.eng.yale.edu/elab/research… svision/svision.html

Related Stories

Roadrunner supercomputer puts research at a new scale

Jun 12, 2008

Less than a week after Los Alamos National Laboratory's Roadrunner supercomputer began operating at world-record petaflop/s data-processing speeds, Los Alamos researchers are already using the computer to ...

An artificial eye on your driving

Apr 20, 2010

With just a half second's notice, a driver can swerve to avoid a fatal accident or slam on the brakes to miss hitting a child running after a ball. But first, the driver must perceive the danger.

Learning about brains from computers, and vice versa

Feb 15, 2008

For many years, Tomaso Poggio’s lab at MIT ran two parallel lines of research. Some projects were aimed at understanding how the brain works, using complex computational models. Others were aimed at improving the abilities ...

Non-Blinding Headlights

Feb 25, 2005

Russian scientists from Dimitrovgrad (Ul'yanovsk area) have designed a new non-blinding headlight system. Its use in cars will significantly decrease the risk of driving at night, because the oncoming light will be duller, ...

User comments (7)

blazingspark
5 / 5 (2) Sep 15, 2010
Awesome. By the look of progress in these areas, a general AI might be possible in 10-20 years.

Just need to bring all these specialized AIs together.
MarkyMark
1 / 5 (1) Sep 16, 2010

Just need to bring all these specialized AIs together.

And that, I think, is where the problem lies, and why I think 20 years is too soon.
plasticpower
3 / 5 (2) Sep 16, 2010
I think 20 years is somewhat reasonable. If we can move from monochrome-display cell phones with no internet capabilities to the iPhone in about 10 years, we just might create AI. I hope to see it in my lifetime.
Eikka
not rated yet Sep 16, 2010
FPGA chips are quite neat for this sort of computation. They're a compromise between really fast and expensive DSP chips that only do one thing, and programmable and cheap but inefficient CPUs.

What this new "supercomputer" is, basically, is a programmable gate array chip that simulates a bunch of other circuits which, unlike in a computer simulation, are actually physically parallel to each other, so they can compute really fast despite being somewhat slow compared to actual CPUs.

Programmable means that you can change the physical configuration and connections of the logic gates inside it, so you can turn the same chip into pretty much anything that fits within its limitations.

Then, once you've figured out what works, you can take the circuit that it simulates and turn it into a mass-manufactured DSP chip that uses less power and does the same thing faster. Take the scaffolding out to let the building stand on its own, so to speak.
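
A software analogy for the physical parallelism described above (a hypothetical NumPy sketch; real FPGA designs are written in a hardware description language such as Verilog, not Python):

```python
import numpy as np

def fir_sequential(signal, taps):
    """CPU-style: one multiply-accumulate after another, in sequence."""
    acc = 0.0
    for s, t in zip(signal, taps):
        acc += s * t
    return acc

def fir_parallel(signal, taps):
    """FPGA-style analogy: one multiplier is laid out in silicon per tap,
    so all products happen in the same clock cycle. NumPy's vectorized
    dot product is the closest software stand-in for that layout."""
    return float(np.dot(signal, taps))

signal = np.random.rand(16)
taps = np.random.rand(16)
assert np.isclose(fir_sequential(signal, taps), fir_parallel(signal, taps))
```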
Eikka
not rated yet Sep 16, 2010
But I can assure you a cellphone uses less than "a few watts".

My cellphone's battery is about 1000 mAh, and its nominal voltage is 3.7 volts, meaning it holds 3.7 watt-hours of energy.

If my cellphone used just 1 watt of power, I would have only 3.7 hours of operating time. Quite the contrary: on standby my phone can last for two weeks, meaning it draws only about ten milliwatts, and even while I'm talking on it, it stays on for 5 hours, which means it draws less than a watt of power. Using the cell radio is the most power-intensive task you do on a cellphone, even on a smartphone.
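
Checking that arithmetic with the comment's own figures (a quick sketch; the capacity and runtimes are as stated above):

```python
# Power draw estimated from battery capacity and runtime.
capacity_wh = 1.0 * 3.7                    # 1000 mAh at 3.7 V = 3.7 Wh

standby_watts = capacity_wh / (14 * 24)    # two weeks of standby
print(f"standby draw  : {standby_watts * 1000:.0f} mW")  # ~11 mW

talk_watts = capacity_wh / 5               # five hours of talk time
print(f"talk-time draw: {talk_watts:.2f} W")             # ~0.74 W, under a watt
```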
NanoStuff
not rated yet Sep 16, 2010
Mr Eugenio Culurciello is a self-promoting bullshitter. All his claims about relative performance are not only unreasonable but absolute forgeries of the truth. You can easily verify this for yourself, just check the floating point performance of any modern GPU.

It's as if he woke up one morning and realized "Holy shit computers can recognize patterns? How can I take credit for this?"
DaffyDuck
not rated yet Sep 16, 2010
And that, I think, is where the problem lies, and why I think 20 years is too soon.

15 years is about the timeframe for the first supercomputer able to brute-force simulate the number of neurons in a human brain, at least with the current complexity and (in)efficiency of simulation (last time I heard, Blue Brain runs 200 simulations for each neuron to model the behavior, one neuron per processor).
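
For scale, a rough count under that comment's assumptions (the ~10^11 neuron figure is the commonly cited order of magnitude for the human brain; the other numbers are taken from the comment at face value):

```python
# Brute-force whole-brain simulation, counted under the comment's assumptions.
neurons = 1e11           # commonly cited order of magnitude for the human brain
sims_per_neuron = 200    # the comment's Blue Brain figure

# At one neuron per processor, brute force needs as many processors as neurons:
print(f"processors needed      : {neurons:.0e}")                    # 1e+11
print(f"concurrent simulations : {neurons * sims_per_neuron:.0e}")  # 2e+13
```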