The time it takes to reassemble the world

January 24, 2007
Modern human brain. Image source: Univ. of Wisconsin-Madison Brain Collection.

A few glimpses are enough to perceive a seamless and richly detailed visual world. But instead of arriving as "photographic snapshots," information about the color, shape, and motion of an object is pulled apart and sent through individual nerve cells, or neurons, to the visual center of the brain. How the brain puts the scene back together has been hotly debated ever since neurons were discovered over a century ago.

A novel experimental design allowed researchers at the Salk Institute for Biological Studies to scrutinize this process, called conjunction, stopwatch in hand. They found that individual features of an object are perceptually joined together by a computational process that takes time: 1/100th of a second, to be exact. Their findings are reported in the Jan. 24 issue of the Journal of Neuroscience.

"The question of how the brain integrates different signals is fundamental to our understanding of sensory processing, and a range of different theories have been advanced,” says John Reynolds, Ph.D., an assistant professor in the Systems Neurobiology Laboratory who led the study. "Our finding that a very small, but consistent, amount of time is required to compute a very simple conjunction is important because it places very tight limits on the amount of time that is available for the mechanisms that mediate this computation to operate.”

To measure the time required for integration, Clara Bodelón, Ph.D., a mathematician in Reynolds’ laboratory, painstakingly designed pairs of simple images—for example, a red vertical stripe pattern or a green horizontal pattern—which, when presented quickly enough, cancel one another and become invisible. (See [figure]).
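To make the idea concrete, here is a minimal sketch, in Python, of one plausible way to build such a canceling pair; it is not the researchers' actual stimulus code, and the stripe width, image size, and colors are invented for illustration. Frame B is simply frame A with the two colors swapped, so rapid alternation between them averages to a uniform field:

```python
# Illustrative sketch only: a red/green striped frame and its color-swapped
# complement. Averaged over one fast alternation cycle, the stripes cancel
# and every pixel becomes the same yellowish mixture.
import numpy as np

SIZE, PERIOD = 256, 32                      # image size and stripe width (px), assumed

def striped(first, second, orientation="vertical"):
    """Alternate `first` and `second` RGB colors in stripes PERIOD pixels wide."""
    idx = (np.arange(SIZE) // PERIOD) % 2   # 0,0,...,1,1,... stripe index
    row = np.where(idx[:, None] == 0, first, second)   # one row of pixels, (SIZE, 3)
    img = np.broadcast_to(row[None, :, :], (SIZE, SIZE, 3)).copy()
    if orientation == "horizontal":
        img = img.transpose(1, 0, 2)        # turn vertical stripes into horizontal ones
    return img

RED, GREEN = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])

frame_a = striped(RED, GREEN)               # red/green vertical grating
frame_b = striped(GREEN, RED)               # the same grating with colors swapped

mean_frame = (frame_a + frame_b) / 2        # time average over one alternation cycle
assert np.allclose(mean_frame, mean_frame[0, 0])   # spatially uniform: no visible pattern
print("time-averaged pixel:", mean_frame[0, 0])    # [0.5, 0.5, 0.0]
```

Shown fast enough, the visual system integrates successive frames roughly the way this average does, which is why the pair disappears.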

After securing the last eight computer monitors in the world that could actually present the stimuli quickly enough to exceed the limits of perception (newer LCD monitors don't refresh the screen fast enough), and after painstakingly calibrating the monitors to precisely control the activity of individual photoreceptors in the eye, the Salk researchers were ready to inch closer to answering an age-old and much-debated question: How do neurons communicate to give rise to our coherent perception of the world?
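The arithmetic behind that constraint is simple; the numbers below are examples, since the article does not state the monitors' actual refresh rates. Each image in the alternating pair stays on screen for one frame, so its presentation duration is the reciprocal of the refresh rate:

```python
# Example refresh rates only -- the study's actual rates are not given here.
# Each image is displayed for one frame, i.e. 1/refresh_rate seconds.
for refresh_hz in (60, 100, 160):
    frame_ms = 1000.0 / refresh_hz
    verdict = "below" if frame_ms < 10 else "not below"
    print(f"{refresh_hz:>3} Hz -> {frame_ms:5.2f} ms per image "
          f"({verdict} the ~10 ms binding time the study reports)")
```

At an ordinary 60 Hz, each image lingers for almost 17 ms, longer than the entire binding computation; probing rates faster than perception requires displays that can go well beyond that.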

At very high presentation rates, the stimuli were invisible. But when Bodelón slowed the presentation rate, human observers could tell an image's orientation. Interestingly, when the presentation rate was lowered even further, the test subjects could distinguish both color and orientation but were unable to say which image, the vertical or the horizontal one, was red or green. In other words, the brain could "see" both form and color but could not see how they were combined.

Only after the presentation of the stimuli was slowed even further could the observers accurately report the color and orientation of the individual images, indicating that combining these features into a coherent percept is a time-consuming process. Thus, the features of the stimulus were available to perception before they were "bound" together; the binding itself required more time.
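That comparison of thresholds is the heart of the measurement. The sketch below walks through the logic with invented numbers, not the study's data: if observers need a longer presentation to report the color-orientation pairing than to report either feature alone, the difference estimates the time the binding step itself consumes:

```python
# Hypothetical threshold durations (ms), invented for illustration.
thresholds_ms = {
    "orientation alone": 20.0,
    "color alone": 20.0,
    "color-orientation conjunction": 30.0,
}

# The slowest single-feature threshold is the baseline; anything beyond it
# is attributed to the extra integration (binding) computation.
feature_ms = max(thresholds_ms["orientation alone"], thresholds_ms["color alone"])
binding_ms = thresholds_ms["color-orientation conjunction"] - feature_ms
print(f"estimated binding time: {binding_ms:.0f} ms")   # ~10 ms (1/100 s) in the study
```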

"Nobody knew whether a separate computation step was necessary to integrate individual attributes of objects and, if so, how long it would take,” explains Bodelón. "The fact that it takes time to reliably perceive the combination of color and orientation points to the existence of a distinct integration mechanism. We can now start to test different hypotheses about the nature of this mechanism,” she adds.

"The question how the brain synthesizes visual information is of tremendous importance from a basic science standpoint,” explains Reynolds and adds that "it also has important practical implications for understanding and ultimately treating disorders of perception, such as visual agnosia, a debilitating condition in which the patient cannot ‘see’ complex visual stimuli.”

By precisely measuring this fleeting visual computation, Bodelón and her colleagues have taken an important first step in understanding the mechanisms that fail in patients who suffer from this disorder.

Source: Salk Institute
