Measured -- The time it takes us to find the words we need

Nov 23, 2009

(PhysOrg.com) -- The time it takes for our brains to search for and retrieve the word we want to say has been measured for the first time. The discovery is reported in a paper published in the Proceedings of the National Academy of Sciences of the USA today.

Most people think that words and meanings are two sides of the same coin, and that the form of a word is the same as its meaning, or at least that word and meaning cannot be split. However, this is not the case. Word forms have an existence of their own in the human mind, disconnected from meaning, at least for a fraction of a second.

Until now, in the field of language production, it was unknown exactly when a word form is retrieved by the human brain when, for instance, people have to name a picture.

As Professor Guillaume Thierry of Bangor University, one of the paper's authors explains:

"If you have to say the word apple upon seeing the picture of an apple, the brain does not access the word form "a-p-p-l-e" instantly; it takes time, and until now, it was unknown exactly how much time it took. Along with colleagues at Pompeu Fabra and Barcelona universities, we measured exactly when word forms are retrieved by the brain. That happens about one fifth of a second after a picture is shown."

Thierry explains: "This is a very short time, but it makes a lot of sense if one considers that the average normal speech rate is about five words per second. Surely, if we can produce five words per second in normal speech, it means that we can dig each and every word from memory in about one fifth of a second."

Thierry and colleagues hope to understand every stage of word production: analysis of meaning, word access, word retrieval and programming of speech. They also intend to do the same in comprehension, to reach a full understanding of the stages the human mind goes through to understand and produce language.

Their experiment combined picture naming with a technique that measures the electrical activity produced by the brain at the scalp. It pioneered the recording of this activity while participants spoke out loud, which proved a technical challenge because mouth movements produce electrical noise stronger than the signals produced by the brain.

The research is the fruit of a collaboration between language laboratories at Pompeu Fabra and Bangor universities.

More information: The time course of word retrieval revealed by event-related brain potentials during overt speech. Albert Costa, et al., PNAS (Online Early Edition, November 23-27, 2009).

Provided by Bangor University

User comments

frajo (Nov 25, 2009):
This model is too simple. The picture of a (real-world) object acts as an input signal that triggers not just one (the correct) word, but a whole lot of associations of several degrees (associations of associations), which multiply for every language the subject is acquainted with. Out of this seething pile of words and partial words the brain somehow manages to filter out the one with the highest weight. Most of the time, that is. In a fifth of a second.

Of course, computers can be faster. But they don't have to parse the universe of associations a middle-aged human being has acquired. They can't even translate a poem.
