Analyzing pixel correlations in photographs improves image analysis

May 21, 2014
Saliency map (right) of an image (left), illustrating how a computer model identifies salient information, such as the high-visibility vest, based on statistical analysis. Credit: A*STAR Institute for Infocomm Research

A visual saliency technique that can detect and extract relevant information from both still and moving images has many applications for computer image processing. Such a technique can be used to detect motion, distinguish different objects and improve the quality of specific parts of an image through selective compression.

Shijian Lu and colleagues from the A*STAR Institute for Infocomm Research in Singapore have developed a robust and efficient method for capturing such salient information from images and movies. They found that the key lies in the distribution of brightness and color between pairs of pixels.

Digital images are encoded as pixels, or points in an image. To detect an object (for example, a person standing in the foreground), brightness variations between neighboring pixels could be compared. However, considering individual pixels alone can be misleading: context matters when distinguishing important details from unimportant background information.
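As a rough illustration of this per-pixel comparison, the minimal NumPy sketch below (the function name and array layout are illustrative assumptions, not taken from the paper) sums brightness differences between each pixel and its right and bottom neighbors. It finds local edges, but on its own it cannot tell meaningful boundaries from busy background texture.

```python
import numpy as np

def local_brightness_contrast(gray):
    """Sum of absolute brightness differences between each pixel and its
    right/bottom neighbours -- the simple per-pixel comparison described above."""
    g = gray.astype(float)
    dx = np.abs(np.diff(g, axis=1))   # differences with right neighbours, shape (H, W-1)
    dy = np.abs(np.diff(g, axis=0))   # differences with bottom neighbours, shape (H-1, W)
    contrast = np.zeros_like(g)
    contrast[:, :-1] += dx
    contrast[:-1, :] += dy
    return contrast

# Usage (hypothetical): 'gray' is a 2-D array of pixel intensities loaded elsewhere.
# High values mark strong local edges, but without wider context they cannot
# separate important object boundaries from cluttered background texture.
```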

The technique developed by the A*STAR researchers therefore involves counting the pixels in an image by color and plotting the distribution. This reveals not only the distribution of colors but also how frequently pairs of neighboring pixels with particular color combinations appear. A low frequency of pixel pairs with a certain difference in color indicates a region of high interest, as it denotes clear boundaries between objects. In this way, salient features can be easily identified: not only large areas of contrast in a photograph, such as a yellow school bus in front of a neutral background, but also contrasts in smaller areas, such as a person wearing a safety vest (see image).
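The sketch below illustrates this co-occurrence idea in simplified form; it is not the authors' published algorithm, and the use of grayscale intensities, the quantization level, the neighborhood, and the scoring are assumptions made for illustration. Pairs of neighboring intensity levels that occur rarely in the histogram receive high scores, which tends to highlight object boundaries.

```python
import numpy as np

def cooccurrence_saliency(gray, bins=32):
    """Toy sketch of co-occurrence-based saliency (not the authors' exact method)."""
    # 1. Quantize intensities into a small number of levels.
    q = np.clip((gray.astype(float) / 256.0 * bins).astype(int), 0, bins - 1)

    # 2. Co-occurrence histogram: how often each pair of levels appears
    #    in horizontally or vertically adjacent pixels.
    hist = np.zeros((bins, bins))
    for a, b in [(q[:, :-1], q[:, 1:]), (q[:-1, :], q[1:, :])]:
        np.add.at(hist, (a.ravel(), b.ravel()), 1)
    prob = (hist + hist.T) / (2.0 * hist.sum())

    # 3. Score each pixel by the rarity of the pairs it forms with its
    #    right and bottom neighbours; rare pairs suggest object boundaries.
    rarity = -np.log(prob + 1e-12)
    saliency = np.zeros_like(gray, dtype=float)
    saliency[:, :-1] += rarity[q[:, :-1], q[:, 1:]]
    saliency[:-1, :] += rarity[q[:-1, :], q[1:, :]]
    return saliency / (saliency.max() + 1e-12)
```

In this toy version, a yellow vest against a dark jacket produces neighbor pairs that are rare across the whole image, so those pixels score highly even though the vest covers only a small area.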

"Our model has great potential for predicting the point in an image that will attract the human eye," comments Lu. "Apart from generic object detection, it can be applied to tasks such as guiding robots or to the smart design of web pages and advertisements."

The next step for the researchers will be to apply this scheme to detecting motion in videos, which follows rules similar to those for identifying relevant information in still photographs. Moreover, Lu says that their algorithm enables more complex approaches to image analysis.

"An example is the development of computational modeling of the top-down approach of humans looking at a scene," says Lu. "Combining our bottom-up modeling algorithm with a top-down visual search could solve many challenging computer vision problems—such as anomaly detection or target search—in a more robust, efficient and cognitive manner."


More information: Lu, S., Tan, C. & Lim, J.-H. "Robust and efficient saliency modeling from image co-occurrence histograms." IEEE Transactions on Pattern Analysis and Machine Intelligence 36, 195–201 (2014). dx.doi.org/10.1109/TPAMI.2013.158
