Image processing: The human (still) beats the machine

October 31, 2011 By Emmanuel Barraud, École Polytechnique Fédérale de Lausanne

A novel experiment conducted by researchers at the Idiap Research Institute and Johns Hopkins University highlights some of the limitations of automatic image analysis systems. Their results were recently published in the early online edition of the Proceedings of the National Academy of Sciences.

Anyone with a relatively new digital camera has experienced it: the system that is supposed to automatically identify faces and smiles sometimes doesn’t work quite right. Patterns in a photo of a bookshelf or of leaves on a tree are often mistaken for faces.

Behind this nearly universal gadget are the results of years of “computer vision” research. When you frame a scene, the camera divides it into many small zones and tries to identify subtle differences in hue. A dark, vaguely horizontal band can indicate eyes and eyebrows – or the empty space above a series of books.
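The kind of low-level cue the article describes can be sketched in a few lines. The example below is purely illustrative (it is not how any real camera firmware works): it scans a tiny grayscale image for a dark, roughly horizontal band, the cue that might indicate eyes and eyebrows. The same cue fires on the shadow gap above a row of books, which is exactly the false positive the article mentions.

```python
# Illustrative sketch only: a crude "dark horizontal band" detector,
# the kind of cue that can indicate eyes/eyebrows -- or a bookshelf.

def dark_band_rows(image, threshold=0.35):
    """Return indices of rows whose mean intensity falls below `threshold`.

    `image` is a list of rows of floats in [0, 1] (0 = black, 1 = white).
    """
    return [i for i, row in enumerate(image)
            if sum(row) / len(row) < threshold]

# A crude "face": bright skin with one dark eye/eyebrow band.
face = [
    [0.9, 0.9, 0.9, 0.9],
    [0.1, 0.2, 0.1, 0.2],   # dark band -> detector fires
    [0.8, 0.9, 0.8, 0.9],
]

# The empty space above a shelf of books has the same signature.
bookshelf = [
    [0.7, 0.8, 0.7, 0.8],
    [0.2, 0.1, 0.2, 0.1],   # shadow gap above the books
    [0.6, 0.7, 0.6, 0.7],
]

print(dark_band_rows(face))       # -> [1]
print(dark_band_rows(bookshelf))  # -> [1]  (false positive)
```

Both images trigger the detector on row 1, which is why a heuristic built on such local intensity cues can "see" a face where no human would.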

How can the camera make such glaring errors, mistakes that no human would ever commit? To try to grasp the mechanisms at work in the image analysis process, François Fleuret, Senior Scientist at EPFL and researcher at the Idiap Research Institute in Martigny, has developed, along with colleagues from Johns Hopkins University, a “simple” contest in which humans and machines compete. The experiment and its results have just been published in the advance online edition of PNAS.

The candidates were presented with a series of small, square black-and-white images of random shapes and asked to classify them into two “families,” discovering the classification criterion for themselves — for example, whether one shape is inside another or the two are side by side.

While the solution was often obvious for humans, who would understand the trick after just a few images, the computers frequently had to be shown several thousand examples before reaching a satisfactory result. Even worse, one of the 24 puzzles couldn’t be solved by machine analysis at all.

The two images on top belong to the first family; those below to the second family. Humans quickly understand that the criterion is the position of the smaller shape: either it’s in the center of the other shape or it’s not.

“We should remember that humans have had decades of experiential learning, in which they’re perceiving dozens of images per second, not to mention their genetic background. The computers are basically ‘blank slates’ in comparison,” Fleuret remarks. By simplifying the images as much as possible, the scientists wanted to identify the main weaknesses of machine learning. “What we found, in a general sense, was that humans jump immediately to a semantic level of image analysis,” he continues. “A human will say that one pair of images is more crowded than the other, whereas the computer will compare, for example, numerical values associated with the pixel density in a given perimeter.”

The experiment gave the researchers a glimpse into the “black box” of how the intelligence of a supposedly self-taught machine develops. “It’s the first time that we have been able to precisely, and on an identical task, quantify and compare the performance of classical learning algorithms and humans,” adds Fleuret. The scientists were also able to confirm that increasing the number and variety of measurements made on the image, upon which learning depends, improved the success rate. “When classifying the image depends on the relative placement of shapes in the image, machine learning has a really hard time,” Fleuret comments. “This justifies the current trend in the field to invent algorithms that are designed to identify individual parts of the image and their relative position.”
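Why relative placement is so hard for pixel-statistics learners can be shown with a toy example (illustrative only, not the actual PNAS benchmark code). The two images below contain exactly the same shapes — a hollow square and a dot — so their global pixel histograms are identical, and any classifier fed only such statistics cannot separate the two classes. A relational feature ("is the dot inside the square?") separates them immediately, the way a human does.

```python
# Toy illustration: identical pixel statistics, different relational classes.

def blank(n=8):
    """An n-by-n binary image of zeros."""
    return [[0] * n for _ in range(n)]

def draw_square_outline(img, top, left, size):
    """Draw a hollow square of side `size` with its corner at (top, left)."""
    for k in range(size):
        img[top][left + k] = img[top + size - 1][left + k] = 1
        img[top + k][left] = img[top + k][left + size - 1] = 1

def put_dot(img, r, c):
    img[r][c] = 1

def histogram(img):
    """Global pixel statistics: counts of 0s and 1s."""
    flat = [p for row in img for p in row]
    return (flat.count(0), flat.count(1))

def dot_inside_square(dot, top, left, size):
    """Relational feature: does the dot lie strictly inside the outline?"""
    r, c = dot
    return top < r < top + size - 1 and left < c < left + size - 1

# Class 1: dot inside the square.  Class 2: dot beside it.
inside = blank(); draw_square_outline(inside, 1, 1, 5); put_dot(inside, 3, 3)
beside = blank(); draw_square_outline(beside, 1, 1, 5); put_dot(beside, 3, 7)

print(histogram(inside) == histogram(beside))   # True: pixel stats can't tell them apart
print(dot_inside_square((3, 3), 1, 1, 5))       # True: class 1
print(dot_inside_square((3, 7), 1, 1, 5))       # False: class 2
```

This is the gap the article points at: the discriminating information lives in the arrangement of parts, not in any per-pixel measurement, which is why algorithms that explicitly model parts and their relative positions are the current trend.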

The rapidity of the human brain, the fact that it can instantly “reconstruct” an entire object even when part of it is hidden, and its ability to find connections between highly variable parameters while taking the temporal dimension into account (clothing and gait, for example, instead of a face, for recognizing a person) all give it a huge advantage over machines in the area of visual recognition. At their own pace, however, electronic devices will continue to benefit from improving techniques and processor speeds to get even better at decoding the world.


More information: Comparing machines and humans on a visual categorization test, PNAS, published online before print October 17, 2011, doi:10.1073/pnas.1109168108




4.7 / 5 (50) Oct 31, 2011
We should remember that humans have had decades of experiential learning, in which they're perceiving dozens of images per second, not to mention their genetic background.

Algorithmic "learning" is the wrong approach if comparing to humans, and it has nothing to do with genes. The human mind has had MILLIONS of years, not just "decades", to evolve mechanisms to process images. It's not all about "learning"... the mind has built-in mechanisms that process images prior to consciousness.

Computers are nowhere near matching humans in image processing, because the prerequisite for that is that humans have an inkling of how the mind functions... which is WAY different from "algorithms".
5 / 5 (1) Oct 31, 2011
In order to function competitively in our world, humans have incredibly fast image processing. Imagine what is going on when you track a fly, or play air hockey. This has allowed us to see animals that are camouflaged and hunt them for food. We also have "smooth eye tracking," which allows our eyes to move instead of our whole head (like a cat). Playing video games, watching TV, and driving cars all strengthen our image processing abilities to a level greater than any other animal's. A computer cannot compete with that, yet. Not to mention we are able to "understand" what we see, which is even further off for computers.
not rated yet Oct 31, 2011
Woooo. I don't know about you guys/gals, but I have extreme visual snow/static, which when I read this:

" The rapidity of the human brain, the fact that it can instantly reconstruct an entire object even when part of it is hidden "

Made me smile somewhat, because I see heavy static on everything. I started looking it up a while back and found what's called "stochastic resonance in visual perception."



Basically, it allows me to see small details most people can't, so I wonder if this could somehow be used for the problems described by the article in terms of object recognition...
5 / 5 (1) Oct 31, 2011
They speak of 'classical' learning algorithms, but I wonder how they would have gone had they used a vision system based on Jeff Hawkins' research (Numenta) implementing Hierarchical Temporal Memory (HTM) to simulate the neocortex.

Here's an interesting lecture: Advances in Modeling Neocortex and its Impact on Machine Intelligence:
