Learning about brains from computers, and vice versa

February 15, 2008

For many years, Tomaso Poggio’s lab at MIT ran two parallel lines of research. Some projects were aimed at understanding how the brain works, using complex computational models. Others were aimed at improving the abilities of computers to perform tasks that our brains do with ease, such as making sense of complex visual images.

But recently Poggio has found that the work has progressed so far, and the two tasks have begun to overlap to such a degree, that it’s now time to combine the two lines of research.

He’ll describe his lab’s change in approach, and the research that led up to it, at the American Association for the Advancement of Science annual meeting in Boston on Saturday, Feb. 16.

The turning point came last year, when Poggio and his team were working on a computer model designed to figure out how the brain processes certain kinds of visual information. As a test of the vision theory they were developing, they tried using the model vision system to actually interpret a series of photographs. Although the model had not been developed for that purpose—it was just supposed to be a theoretical analysis of how certain pathways in the brain work—it turned out to be as good as, or even better than, the best existing computer-vision systems, and as good as humans, at rapidly recognizing certain kinds of complex scenes.

“This is the first time a model has been able to reproduce human behavior on that kind of task,” says Poggio, the Eugene McDermott Professor in MIT’s Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory.

As a result, “My perspective changed in a dramatic way,” Poggio says. “It meant that we may be closer to understanding how the visual cortex recognizes objects and scenes than I ever thought possible.”

The experiments involved a task that is easy for people but very hard for computer vision systems: recognizing whether or not there were any animals present in photos that ranged from relatively simple close-ups to complex landscapes with a great variety of detail. It’s a very difficult problem, since “animals” can include anything from snakes to butterflies to cattle, set against backgrounds that might contain distracting trees or buildings. People were shown the scenes for just a fraction of a second, so that recognition had to rely on a particular part of the human visual cortex known as the ventral pathway.
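The article does not describe the model’s internals, but the task itself, a rapid binary “animal present or not” decision on images, is easy to make concrete. The sketch below, in Python with NumPy, is a minimal illustration under assumed details: hand-picked edge filters, local max pooling for some tolerance to position, and a simple linear readout, trained on synthetic images rather than photographs. It is not the lab’s model; it only shows the general shape of a feedforward pipeline that turns an image into features and then into a yes/no answer.

```python
# Minimal sketch of a binary "object present?" image classifier.
# Everything here (filters, image sizes, synthetic data) is an illustrative
# assumption, not the published model or its data.
import numpy as np

rng = np.random.default_rng(0)

def correlate2d_valid(image, kernel):
    """Naive 'valid' 2-D cross-correlation with a small kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def max_pool(feature_map, size=4):
    """Local max pooling: keep the strongest response in each size x size block."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    trimmed = feature_map[:h, :w]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def feedforward_features(image, filters):
    """One filtering + pooling stage per filter; concatenate pooled responses."""
    return np.concatenate(
        [max_pool(correlate2d_valid(image, f)).ravel() for f in filters]
    )

# Hand-picked oriented edge filters (stand-ins for learned or Gabor-like templates).
filters = [
    np.array([[1, 0, -1]] * 3, dtype=float),                       # vertical edges
    np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], dtype=float),   # horizontal edges
]

def make_image(has_object):
    """Random texture; label 1 adds a bright blob as a crude stand-in for a foreground object."""
    img = rng.normal(size=(32, 32))
    if has_object:
        img[12:20, 12:20] += 3.0
    return img

X, y = [], []
for label in (0, 1) * 100:
    X.append(feedforward_features(make_image(label), filters))
    y.append(label)
X, y = np.array(X), np.array(y)

# Linear readout fit by least squares (a stand-in for the classifier stage).
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, 2.0 * y - 1.0, rcond=None)
pred = (A @ w > 0).astype(int)
print("training accuracy:", (pred == y).mean())
```

On this toy data the linear readout separates the two classes easily; the point is only the structure of the pipeline (local filtering, pooling for positional tolerance, then a classifier), not the accuracy number.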

The visual cortex is a large part of the brain’s processing system, and one of the most complex, so reaching an understanding of how it works could be a significant step toward understanding how the whole brain works—one of the greatest problems in science today.

“Computational models are beginning to provide powerful new insights into the key problem of how the brain works,” says Poggio, who is also co-director of the Center for Biological and Computational Learning and an investigator at the McGovern Institute for Brain Research at MIT.

Although the model Poggio and his team developed produces surprisingly good results, “we do not quite understand why the model works as well as it does,” he says. They are now working on developing a comprehensive theory of vision that can account for these and other recent results from the lab.

“Our visual abilities are computationally amazing, and we are still far from imitating them with computers,” Poggio says. But the new work shows that it may be time for researchers in artificial intelligence to start paying close attention to the latest developments in neuroscience, he says.

Source: Massachusetts Institute of Technology
