Learning about brains from computers, and vice versa

Feb 15, 2008

For many years, Tomaso Poggio’s lab at MIT ran two parallel lines of research. Some projects were aimed at understanding how the brain works, using complex computational models. Others were aimed at improving the abilities of computers to perform tasks that our brains do with ease, such as making sense of complex visual images.

But recently Poggio has found that the work has progressed so far, and the two tasks have begun to overlap to such a degree, that it’s now time to combine the two lines of research.

He’ll describe his lab’s change in approach, and the research that led up to it, at the American Association for the Advancement of Science annual meeting in Boston, on Saturday, Feb. 16.

The turning point came last year, when Poggio and his team were working on a computer model designed to figure out how the brain processes certain kinds of visual information. As a test of the vision theory they were developing, they tried using the model vision system to actually interpret a series of photographs. Although the model had not been developed for that purpose—it was just supposed to be a theoretical analysis of how certain pathways in the brain work—it turned out to be as good as, or even better than, the best existing computer-vision systems, and as good as humans, at rapidly recognizing certain kinds of complex scenes.

“This is the first time a model has been able to reproduce human behavior on that kind of task,” says Poggio, the Eugene McDermott Professor in MIT’s Department of Brain and Cognitive Sciences and Computer Science and Artificial Intelligence Laboratory.

As a result, “My perspective changed in a dramatic way,” Poggio says. “It meant that we may be closer to understanding how the visual cortex recognizes objects and scenes than I ever thought possible.”

The experiments involved a task that is easy for people but very hard for computer vision systems: recognizing whether or not any animals were present in photos that ranged from relatively simple close-ups to complex landscapes with a great variety of detail. It's a demanding task, since "animals" can include anything from snakes to butterflies to cattle, set against backgrounds that might include distracting trees or buildings. People were shown the scenes for just a fraction of a second, a glimpse so brief that recognition relies on a particular part of the human visual cortex, known as the ventral pathway, to identify what is seen.
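The article does not spell out the model's architecture, but feedforward models of the ventral pathway from this line of research typically alternate two operations: template matching against a bank of oriented filters ("simple"-cell stage) and local max pooling ("complex"-cell stage), whose outputs feed a simple classifier. The sketch below illustrates that alternation in plain numpy; the tiny filters, image size, and pooling window are illustrative assumptions, not the lab's actual model.

```python
import numpy as np

def s_layer(image, templates):
    """'Simple'-cell stage: slide each template over the image and
    record the template-match response at every position."""
    h, w = image.shape
    th, tw = templates[0].shape
    maps = []
    for t in templates:
        out = np.zeros((h - th + 1, w - tw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + th, j:j + tw] * t)
        maps.append(out)
    return maps

def c_layer(maps, pool=2):
    """'Complex'-cell stage: max pooling over local neighborhoods,
    which makes responses tolerant to small shifts of the stimulus."""
    pooled = []
    for m in maps:
        ph, pw = m.shape[0] // pool, m.shape[1] // pool
        out = np.zeros((ph, pw))
        for i in range(ph):
            for j in range(pw):
                out[i, j] = m[i * pool:(i + 1) * pool,
                              j * pool:(j + 1) * pool].max()
        pooled.append(out)
    return pooled

# Four oriented edge templates standing in for V1-like filters
# (hypothetical stand-ins; real models use banks of Gabor filters).
templates = [
    np.array([[ 1.,  1.], [-1., -1.]]),   # horizontal edge
    np.array([[ 1., -1.], [ 1., -1.]]),   # vertical edge
    np.array([[ 1., -1.], [-1.,  1.]]),   # diagonal
    np.array([[-1.,  1.], [ 1., -1.]]),   # anti-diagonal
]

rng = np.random.default_rng(0)
image = rng.standard_normal((16, 16))   # placeholder for a photograph

features = c_layer(s_layer(image, templates))
vector = np.concatenate([f.ravel() for f in features])
# In the animal-detection experiments, a feature vector like this
# would feed a classifier trained on animal / non-animal labels.
```

Stacking several such alternating stages, each with larger and more selective templates, is what lets models of this kind build up tolerance to position and scale while staying purely feedforward, matching the fraction-of-a-second presentation times used with human subjects.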

The visual cortex is a large part of the brain’s processing system, and one of the most complex, so reaching an understanding of how it works could be a significant step toward understanding how the whole brain works—one of the greatest problems in science today.

“Computational models are beginning to provide powerful new insights into the key problem of how the brain works,” says Poggio, who is also co-director of the Center for Biological and Computational Learning and an investigator at the McGovern Institute for Brain Research at MIT.

Although the model Poggio and his team developed produces surprisingly good results, “we do not quite understand why the model works as well as it does,” he says. They are now working on developing a comprehensive theory of vision that can account for these and other recent results from the lab.

“Our visual abilities are computationally amazing, and we are still far from imitating them with computers,” Poggio says. But the new work shows that it may be time for researchers in artificial intelligence to start paying close attention to the latest developments in neuroscience, he says.

Source: Massachusetts Institute of Technology
