Researchers Give Computers Common Sense

Oct 17, 2007
The computer scientists injected context into an automated image labeling system through a post-processing context check. The approach strives to maximize the contextual agreement among the labeled objects within each picture.

Using a little-known Google Labs widget, computer scientists from UC San Diego and UCLA have brought common sense to an automated image labeling system. The common sense comes as the ability to use context to help identify objects in photographs.

For example, if a conventional automated object identifier has labeled a person, a tennis racket, a tennis court and a lemon in a photo, the new post-processing context check will re-label the lemon as a tennis ball.

“We think our paper is the first to bring external semantic context to the problem of object recognition,” said computer science professor Serge Belongie from UC San Diego.

The researchers show that the Google Labs tool called Google Sets can be used to provide external contextual information to automated object identifiers. The paper will be presented on Thursday 18 October 2007 at ICCV 2007 – the 11th IEEE International Conference on Computer Vision in Rio de Janeiro, Brazil.

Google Sets generates lists of related items or objects from just a few examples. If you type in "John," "Paul" and "George," it will return the words Ringo, Beatles and John Lennon. If you type "neon" and "argon," it will give you the rest of the noble gases.

“In some ways, Google Sets is a proxy for common sense. In our paper, we showed that you can use this common sense to provide contextual information that improves the accuracy of automated image labeling systems,” said Belongie.
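Google Sets had no official public API, but the way it can serve as a context source is easy to sketch. In the toy code below, a hypothetical stand-in for the widget expands a few seed labels into related items, and a candidate label is judged to "fit" the image if it appears in the expansion of the other labels (the function names and hard-coded sets are illustrative assumptions, not the paper's implementation):

```python
def google_sets_mock(seeds):
    """Hypothetical stand-in for the Google Sets widget, which had no
    official API: expand a few seed terms into related items."""
    TENNIS = {"person", "tennis racket", "tennis court", "tennis ball"}
    if set(seeds) <= TENNIS:
        return TENNIS - set(seeds)
    return set()

def fits_context(candidate, other_labels):
    # A label agrees with its context if the set expanded from the
    # other labels in the image contains it as a related item.
    return candidate in google_sets_mock(other_labels)

others = ["person", "tennis racket", "tennis court"]
print(fits_context("tennis ball", others))  # True
print(fits_context("lemon", others))        # False
```

With real Google Sets output in place of the mock, the same membership test gives the labeling system its "common sense" signal.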

The image labeling system is a three-step process. First, an automated system splits the image into different regions through the process of image segmentation. In the photo above, image segmentation separates the person, the court, the racket and the yellow sphere.

Next, an automated system provides a ranked list of probable labels for each of these image regions.

Finally, the system adds a dose of context by processing all the different possible combinations of labels within the image and maximizing the contextual agreement among the labeled objects within each picture.

It is during this step that Google Sets can be used as a source of context that helps the system turn a lemon into a tennis ball. In this case, these “semantic context constraints” helped the system disambiguate between visually similar objects.
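The maximization in this final step can be sketched as a brute-force search over label combinations: each combination is scored by the labels' individual confidences plus a pairwise agreement bonus from the context source. This is a simplified illustration, not the paper's optimization method, and the confidence values are made up:

```python
from itertools import product

def best_labeling(candidates, context_score):
    """Pick one label per region so that per-label confidence plus
    pairwise contextual agreement is maximized (brute-force sketch).
    candidates: list of {label: confidence} dicts, one per region."""
    best, best_score = None, float("-inf")
    for combo in product(*[c.items() for c in candidates]):
        labels = [lab for lab, _ in combo]
        score = sum(conf for _, conf in combo)
        # Add pairwise semantic agreement between all labeled objects.
        for i in range(len(labels)):
            for j in range(i + 1, len(labels)):
                score += context_score(labels[i], labels[j])
        if score > best_score:
            best, best_score = labels, score
    return best

# Toy context: tennis-related labels agree with one another.
RELATED = {frozenset(p) for p in [("person", "tennis racket"),
                                  ("tennis racket", "tennis ball"),
                                  ("person", "tennis ball")]}

def context_score(a, b):
    return 1.0 if frozenset((a, b)) in RELATED else 0.0

regions = [
    {"person": 0.9},
    {"tennis racket": 0.8},
    {"lemon": 0.6, "tennis ball": 0.5},  # visually ambiguous sphere
]
print(best_labeling(regions, context_score))
# → ['person', 'tennis racket', 'tennis ball']
```

Even though "lemon" has the higher standalone confidence, the agreement bonuses from the racket and the person lift "tennis ball" to the top, which is exactly the re-labeling described above.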

In another example, the researchers show that an object originally labeled as a cow is (correctly) re-labeled as a boat when the other objects in the image – sky, tree, building and water – are considered during the post-processing context step. In this case, the semantic context constraints helped to correct an entirely wrong image label. The context information came from co-occurrence object information in the training data rather than from Google Sets.

The computer scientists also highlight other advances they bring to automated object identification. First, instead of doing just one image segmentation, the researchers generated a collection of image segmentations and put together a shortlist of stable image segmentations. This increases the accuracy of the segmentation process and provides an implicit shape description for each of the image regions.

Second, the researchers ran their object categorization model on each of the segmentations, rather than on individual pixels. This dramatically reduced the computational demands on the object categorization model.

In addition to Google Sets, the researchers gleaned semantic context information from the co-occurrence of object labels in the training sets.
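This second context source can be sketched directly: count how often each pair of labels appears in the same training image, and use those counts as a pairwise agreement score. The function and data below are an illustrative assumption, not the paper's code:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(training_label_sets):
    """Count how often each pair of labels appears together in a
    training image; the counts serve as a pairwise context score."""
    counts = Counter()
    for labels in training_label_sets:
        for a, b in combinations(sorted(set(labels)), 2):
            counts[(a, b)] += 1
    return counts

# Toy training data: label sets for three annotated images.
training = [
    {"sky", "tree", "water", "boat"},
    {"sky", "water", "boat"},
    {"grass", "cow", "tree"},
]
counts = cooccurrence_counts(training)
print(counts[("boat", "water")])        # 2 — boats co-occur with water
print(counts.get(("cow", "water"), 0))  # 0 — cows do not
```

Under such statistics, a water-filled scene makes "boat" a far more plausible label than "cow" for the same ambiguous region, which is the cow-to-boat correction described above.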

In the two image datasets the researchers tested, categorization accuracy improved considerably with the inclusion of context. Semantic context from Google Sets raised average categorization accuracy by more than 10 percent on one dataset and by about 2 percent on the other. The improvements were larger still when the context information came from the co-occurrence of object labels in the object identifier's training data set.

Right now, the researchers are exploring ways to extend context beyond the presence of objects in the same image. For example, they want to make explicit use of absolute and relative geometric relationships between objects in an image – such as “above” or “inside” relationships. This would mean that if a person were sitting on top of an animal, the system would consider the animal to be more likely a horse than a dog.

Source: University of California, San Diego

