Image processing: The human (still) beats the machine

Oct 31, 2011 By Emmanuel Barraud

(PhysOrg.com) -- A novel experiment conducted by researchers at Idiap Research Institute and Johns Hopkins University highlights some of the limitations of automatic image analysis systems. Their results were recently published in the early online edition of the Proceedings of the National Academy of Sciences.

Anyone with a relatively new digital camera has experienced it: the system that is supposed to automatically identify faces and smiles sometimes doesn't work quite right. Patterns in a photo of a bookshelf or of leaves on a tree are often mistaken for faces.

Behind this nearly universal gadget are the results of years of “computer vision” research. When you frame a scene, the camera divides it into many small zones and tries to identify subtle differences in hue. A dark, vaguely horizontal band can indicate eyes and eyebrows – or the empty space above a series of books.
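The dark-band-over-light-region comparison the article alludes to can be sketched as a Haar-like rectangle feature computed over an integral image, the building block of classic face detectors such as Viola-Jones. The sketch below is illustrative only, not the camera's actual algorithm; the function names and sizes are my own assumptions.

```python
import numpy as np

def integral_image(img):
    """Cumulative sums over rows and columns; any rectangle sum then costs O(1)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1], read off the integral image ii."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def dark_band_feature(img, r, c, h, w):
    """Haar-like feature: brightness of the lower half minus the upper half.

    A large positive response means a dark horizontal band sits above a
    lighter region -- the eyes-and-eyebrows pattern the article describes,
    which a shelf of book spines can also trigger.
    """
    ii = integral_image(img.astype(np.float64))
    top = rect_sum(ii, r, c, r + h // 2, c + w)
    bottom = rect_sum(ii, r + h // 2, c, r + h, c + w)
    return bottom - top
```

A detector slides features like this across the image at many positions and scales; because the empty space above a row of books produces the same response as a brow line, exactly the false positives described above can occur.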

How can the camera make such glaring errors, mistakes that no human would ever commit? To try to grasp the mechanisms at work in the image analysis process, François Fleuret, Senior Scientist at EPFL and researcher at the Idiap Research Institute in Martigny, has developed, along with colleagues from Johns Hopkins University, a "simple" contest in which humans and machines compete. The experiment and its results have just been published in the advance online edition of PNAS.

The candidates were presented with a series of small, square black-and-white images of random shapes and asked to sort them into two "families," discovering the classification criterion for themselves: for example, whether one shape is inside the other or the two are side by side.
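The task just described can be illustrated with a toy generator for one such problem: two squares that are either nested or side by side, plus the "semantic" rule a human quickly discovers. This is only a sketch of the flavor of the benchmark; the image size, shape placement, and function names here are my own assumptions, not the researchers' actual test code.

```python
import numpy as np

def draw_squares(inside):
    """32x32 binary image with two square outlines, nested or side by side.

    Returns the image and the two bounding boxes as (row, col, height, width).
    """
    img = np.zeros((32, 32), dtype=np.uint8)
    if inside:
        boxes = [(4, 4, 24, 24), (12, 12, 8, 8)]    # small square inside the big one
    else:
        boxes = [(8, 2, 12, 12), (8, 18, 12, 12)]   # two squares side by side
    for r, c, h, w in boxes:
        img[r, c:c + w] = img[r + h - 1, c:c + w] = 1   # top and bottom edges
        img[r:r + h, c] = img[r:r + h, c + w - 1] = 1   # left and right edges
    return img, boxes

def is_nested(boxes):
    """The rule a human discovers after a few images: is one box inside the other?

    A pixel-based learner never sees these boxes -- it must infer the relation
    from thousands of raw-pixel examples, which is where it struggles.
    """
    (r0, c0, h0, w0), (r1, c1, h1, w1) = boxes
    return (r0 <= r1 and c0 <= c1 and
            r1 + h1 <= r0 + h0 and c1 + w1 <= c0 + w0)
```

The contrast the article draws is between this one-line relational rule, which humans grasp almost instantly, and a classifier that only sees 1,024 pixel values per image.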

While the solution was often obvious to humans, who would catch on after just a few images, the computers frequently had to be shown several thousand examples before reaching a satisfactory result. Worse still, one of the 24 puzzles couldn't be solved by machine analysis at all.

Figure: The two images on top belong to the first family; those below, to the second. Humans quickly understand that the criterion is the position of the smaller shape: either it is in the center of the other shape or it is not.

"We should remember that humans have had decades of experiential learning, in which they're perceiving dozens of images per second, not to mention their genetic background. The computers are basically 'blank slates' in comparison," Fleuret remarks. By simplifying the images as much as possible, the scientists wanted to identify the main weaknesses of machine learning. "What we found, in a general sense, was that humans jump immediately to a semantic level of image analysis," he continues. "A person will say that one pair of images is more crowded than another, where the computer will compare, for example, numerical values associated with the pixel density in a given perimeter."

The experiment gave the researchers a glimpse into the “black box” of how the intelligence of a supposedly self-taught machine develops. “It’s the first time that we have been able to precisely, and on an identical task, quantify and compare the performance of classical learning algorithms and humans,” adds Fleuret. The scientists were also able to confirm that the number and variety of measures made in the image, upon which learning depends, increased their success rate. “When classifying the image depends on the relative placement of shapes in the image, machine learning has a really hard time,” Fleuret comments. “This justifies the current trend in the field to invent algorithms that are designed to identify individual parts of the image and their relative position.”

The rapidity of the human brain, the fact that it can instantly "reconstruct" an entire object even when part of it is hidden, and its ability to find connections between highly variable parameters while taking the temporal dimension into account (clothing and gait, for example, instead of a face, for recognizing a person) all give it a huge advantage over machines in this domain. At their own pace, however, electronic devices will continue to benefit from improving techniques and processor speeds to get even better at decoding the world.


More information: Comparing machines and humans on a visual categorization test, PNAS, published online before print October 17, 2011, doi:10.1073/pnas.1109168108

Provided by Ecole Polytechnique Federale de Lausanne




User comments (4)

Noumenon
Oct 31, 2011
"We should remember that humans have had decades of experiential learning, in which they're perceiving dozens of images per second, not to mention their genetic background."


Algorithmic "learning" is the wrong approach if comparing to humans, and it has nothing to do with genes. The human mind has had MILLIONS of years, not just "decades," to evolve mechanisms to process images. It's not all about "learning"; the mind has built-in mechanisms that process images prior to consciousness.

Computers are nowhere near matching humans in image processing, because the prerequisite for that is that we have an inkling of how the mind functions, which is WAY different from "algorithms."
krundoloss
Oct 31, 2011
In order to function competitively in our world, humans have incredibly fast image processing. Imagine what is going on when you track a fly or play air hockey. This has allowed us to see animals that are camouflaged and hunt them for food. We also have "smooth eye tracking," which allows our eyes to move instead of our whole head (like a cat). And playing video games, watching TV, and driving cars all strengthen our image processing abilities to a level greater than any other animal's. A computer cannot compete with that, yet. Not to mention we are able to "understand" what we see, which is even further off for computers.
Isaacsname
Oct 31, 2011
Woooo. I don't know about you guys/gals, but I have extreme visual snow/static, so when I read this:

" The rapidity of the human brain, the fact that it can instantly reconstruct an entire object even when part of it is hidden "

it made me smile somewhat, because I see heavy static on everything. I started looking it up a while back and found what's called "stochastic resonance in visual perception":

http://en.wikiped...biology)

http://www.youtub...duEEoCaA

Basically, it allows me to see small details most people can't, so I wonder if this could somehow be applied to the object recognition problems described in the article.
Deesky
Oct 31, 2011
They speak of 'classical' learning algorithms, but I wonder how they would have fared had they used a vision system based on Jeff Hawkins' research (Numenta), implementing Hierarchical Temporal Memory (HTM) to simulate the neocortex.

Here's an interesting lecture: Advances in Modeling Neocortex and its Impact on Machine Intelligence:
http://www.youtub...iFOIbTkE