Researchers release open source code for powerful image detection algorithm

February 11, 2016 by Matthew Chin
An LED light (left) and an image of the same light produced using the Phase Stretch Transform-based algorithm. Credit: Bahram Jalali

A UCLA Engineering research group has made public the computer code for an algorithm that helps computers process images at high speeds and "see" them in ways that human eyes cannot. The researchers say the code could eventually be used in face, fingerprint and iris recognition for high-tech security, as well as in self-driving cars' navigation systems or for inspecting industrial products.

The algorithm performs a mathematical operation that identifies objects' edges and then detects and extracts their features. It also can enhance and recognize objects' textures.

The algorithm was developed by a group led by Bahram Jalali, a UCLA professor of electrical engineering and holder of the Northrop-Grumman Chair in Optoelectronics, and senior researcher Mohammad Asghari.

It is available for free download on two platforms, GitHub and the MATLAB File Exchange. Releasing it as open source allows researchers to work together to study, use and improve the algorithm, and to freely modify and distribute it. It also enables users to incorporate the technology into computer vision, pattern recognition and other image-processing applications.

The Phase Stretch Transform algorithm, as it is known, is a physics-inspired computational approach to processing images and information. The algorithm grew out of UCLA research on a technique called photonic time stretch, which has been used for ultrafast imaging and detecting cancer cells in blood.
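The released code is the authoritative implementation, but the transform's core idea can be sketched in a few lines: apply a nonlinear, frequency-dependent phase kernel in the Fourier domain, then read edge-like features off the phase of the result. The kernel shape and the parameter names `S` and `W` below are illustrative assumptions, not the exact published formulation:

```python
import numpy as np

def pst(image, S=0.3, W=12.0):
    """Minimal sketch of a Phase Stretch Transform-style edge detector.

    Multiplies the image spectrum by a nonlinear, frequency-dependent
    phase kernel and returns the phase of the inverse transform.
    S (strength) and W (warp) are illustrative parameters.
    """
    rows, cols = image.shape
    u = np.fft.fftfreq(rows).reshape(-1, 1)
    v = np.fft.fftfreq(cols).reshape(1, -1)
    r = np.sqrt(u**2 + v**2)           # radial spatial frequency
    # Warped phase profile: grows nonlinearly with frequency, so
    # high-frequency content (edges) picks up the most phase.
    kernel = W * r * np.arctan(W * r) - 0.5 * np.log1p((W * r) ** 2)
    kernel = kernel / np.max(kernel) * S   # normalize peak phase to S
    spectrum = np.fft.fft2(image)
    out = np.fft.ifft2(spectrum * np.exp(-1j * kernel))
    return np.angle(out)               # edges appear in the output phase

# Example: a bright square on a dark background
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
edges = pst(img)   # phase magnitude peaks along the square's border
```

The released implementation reportedly also smooths the image with a localization (low-pass) kernel and thresholds and morphologically cleans the phase output; those post-processing steps are omitted here for brevity.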

The algorithm also helps computers see features of objects that aren't visible using standard imaging techniques. For example, it might be used to detect an LED lamp's internal structure, which—using conventional techniques—would be obscured by the brightness of its light, and it can see distant stars that would normally be invisible in astronomical images.


Related Stories

Algorithm helps turn smartphones into 3-D scanners

December 22, 2015

While 3-D printers have become relatively cheap and available, 3-D scanners have lagged well behind. But now, an algorithm developed by Brown University researchers may help bring high-quality 3-D scanning capability to off-the-shelf ...

Teaching robots to see

December 15, 2014

Syed Saud Naqvi, a PhD student from Pakistan, is working on an algorithm to help computer programs and robots view static images in a way that is closer to how humans see.

New data compression method reduces big-data bottleneck

December 19, 2013

(Phys.org) —In creating an entirely new way to compress data, a team of researchers from the UCLA Henry Samueli School of Engineering and Applied Science has drawn inspiration from physics and the arts. The result is a ...
