The time it takes to reassemble the world

January 24, 2007
Modern human brain. Image source: Univ. of Wisconsin-Madison Brain Collection.

A few glimpses are enough to perceive a seamless and richly detailed visual world. But instead of capturing "photographic snapshots," the visual system pulls apart information about the color, shape and motion of an object and sends it through individual nerve cells, or neurons, to the visual center of the brain. How the brain puts the scene back together has been hotly debated ever since neurons were discovered over a century ago.

A novel experimental design allowed researchers at the Salk Institute for Biological Studies to scrutinize this process, called conjunction, stopwatch in hand. They found that the individual features of an object are joined together by a computational process that takes a measurable amount of time: 1/100th of a second, to be exact. Their findings are reported in the Jan. 24 issue of the Journal of Neuroscience.

"The question of how the brain integrates different signals is fundamental to our understanding of sensory processing, and a range of different theories have been advanced,” says John Reynolds, Ph.D., an assistant professor in the Systems Neurobiology Laboratory who led the study. "Our finding that a very small, but consistent, amount of time is required to compute a very simple conjunction is important because it places very tight limits on the amount of time that is available for the mechanisms that mediate this computation to operate.”

To measure the time required for integration, Clara Bodelón, Ph.D., a mathematician in Reynolds’ laboratory, painstakingly designed pairs of simple images—for example, a red vertical stripe pattern or a green horizontal pattern—which, when presented quickly enough, cancel one another and become invisible. (See [figure]).

After securing the last eight computer monitors in the world that could actually present the stimuli quickly enough to exceed the limits of perception (newer LCD monitors don't refresh the screen fast enough), and after painstakingly calibrating those monitors to precisely control the activity of individual photoreceptors in the eye, the Salk researchers were ready to inch closer to answering an age-old and much-debated question: How do neurons communicate to give rise to our coherent perception of the world?
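The refresh-rate constraint comes down to simple arithmetic: each image must leave the screen before the visual system can register it on its own. The sketch below works through that arithmetic; the 10 ms figure is an assumption chosen for illustration, not a number from the study, which only notes that LCD monitors refresh too slowly.

```python
# Back-of-the-envelope check of whether a monitor can present a
# stimulus pair fast enough. The 10 ms per-frame limit is an
# assumption for illustration, not a figure from the study.

def min_refresh_rate_hz(max_frame_ms):
    """Minimum refresh rate so that a single frame stays on
    screen no longer than max_frame_ms milliseconds."""
    return 1000.0 / max_frame_ms

# If each image may remain visible for at most ~10 ms:
print(min_refresh_rate_hz(10))   # 100.0 (Hz)

# A typical 60 Hz LCD holds each frame for ~16.7 ms -- too long:
print(1000.0 / 60 > 10)          # True
```

The same calculation explains why only fast CRT-style monitors qualified: the required rate scales inversely with the allowed frame duration.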

At very high presentation rates, the stimuli were invisible. But when Bodelón slowed the presentation rate, human observers could tell an image's orientation. Interestingly, when the presentation rate was lowered even further, the test subjects could distinguish both color and orientation but were unable to say which image, the vertical or the horizontal one, was red or green. In other words, the brain could "see" both form and color but could not see how they were combined.

Only after slowing the presentation of the stimuli still further could the observers accurately report the color and orientation of the individual objects together, indicating that computing this combination is itself a time-consuming process. The features of the stimulus were thus available to perception before they were "bound" together; binding the features required additional time.
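The staircase of results above can be sketched as a toy model with three regimes. The threshold values below are invented for illustration; only their ordering, with features becoming visible roughly 10 ms (1/100th of a second) before their conjunction does, reflects the article's finding.

```python
# Toy model of the three perceptual stages the observers reported.
# Both thresholds are assumptions; only the ~10 ms gap between them
# echoes the binding time reported in the article.

FEATURE_THRESHOLD_MS = 20.0   # assumed: features visible above this
BINDING_THRESHOLD_MS = 30.0   # assumed: binding needs ~10 ms more

def percept(frame_duration_ms):
    """What an observer reports at a given frame duration."""
    if frame_duration_ms < FEATURE_THRESHOLD_MS:
        return "invisible"            # stimuli cancel each other out
    if frame_duration_ms < BINDING_THRESHOLD_MS:
        return "unbound features"     # color and orientation, unpaired
    return "bound features"           # which pattern was red, which green

for ms in (10, 25, 40):
    print(ms, percept(ms))
# 10 invisible
# 25 unbound features
# 40 bound features
```

The middle regime is the interesting one: a range of durations where both features are perceived but their pairing is not, which is what points to a separate, time-consuming binding step.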

"Nobody knew whether a separate computation step was necessary to integrate individual attributes of objects and, if so, how long it would take,” explains Bodelón. "The fact that it takes time to reliably perceive the combination of color and orientation points to the existence of a distinct integration mechanism. We can now start to test different hypotheses about the nature of this mechanism,” she adds.

"The question how the brain synthesizes visual information is of tremendous importance from a basic science standpoint,” explains Reynolds and adds that "it also has important practical implications for understanding and ultimately treating disorders of perception, such as visual agnosia, a debilitating condition in which the patient cannot ‘see’ complex visual stimuli.”

By precisely measuring this fleeting visual computation, Bodelón and her colleagues have taken an important first step in understanding the mechanisms that fail in patients who suffer from this disorder.

Source: Salk Institute
