Measured -- The time it takes us to find the words we need

Nov 23, 2009

The time it takes for our brains to search for and retrieve the word we want to say has been measured for the first time. The discovery is reported in a paper published in the Proceedings of the National Academy of Sciences of the USA today.

Most people think that word and meaning are two sides of the same coin, that the form of a word is the same as its meaning, or at least that word and meaning cannot be split. However, this is not the case. Word forms have an existence of their own in the human mind, disconnected from meaning, at least for a fraction of a second.

Until now, in the field of language production, it was not known exactly when a word form is retrieved by the human brain when, for instance, people have to name a picture.

As Professor Guillaume Thierry of Bangor University, one of the paper's authors, explains:

"If you have to say the word apple upon seeing the picture of an apple, the brain does not access the word form 'a-p-p-l-e' instantly; it takes time, and until now, it was unknown exactly how much time it took. Along with colleagues at Pompeu Fabra and Barcelona universities, we measured exactly when word forms are retrieved by the brain. That happens about one fifth of a second after a picture is shown."

Thierry adds: "This is a very short time, but it makes a lot of sense if one considers that the average normal speech rate is about five words per second. Surely, if we can produce five words per second in normal speech, it means that we can dig each and every word from memory in about one fifth of a second."
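The back-of-envelope arithmetic in the quote can be checked directly. A minimal sketch (the five-words-per-second rate is the figure cited in the article; the script itself is illustrative, not part of the study):

```python
# Convert the article's reported speech rate into a per-word time budget.
WORDS_PER_SECOND = 5  # average normal speech rate cited in the article

seconds_per_word = 1 / WORDS_PER_SECOND
milliseconds_per_word = seconds_per_word * 1000

print(f"Time budget per word: {seconds_per_word:.2f} s ({milliseconds_per_word:.0f} ms)")
# → Time budget per word: 0.20 s (200 ms)
# This matches the measured retrieval latency of about one fifth of a second.
```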

Thierry and colleagues hope to understand every stage of word production: analysis of meaning, word access, word retrieval and programming of speech. They also intend to do the same thing in comprehension, to reach a full understanding of the stages the human mind goes through to understand and produce language.

Their experiment combined picture naming with event-related brain potentials, a technique that measures the electrical activity produced by the brain over the scalp. The study pioneered recording this activity while participants spoke out loud, a technical challenge because mouth movements produce electrical noise far stronger than the signals generated by the brain.
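The muscle-noise problem described above is commonly handled in ERP work by discarding trials whose voltage swings are too large to be brain activity. A minimal sketch of threshold-based epoch rejection (the 100 µV threshold and the sample data are illustrative assumptions, not the authors' actual pipeline):

```python
def reject_artifacts(epochs, threshold_uv=100.0):
    """Keep only epochs whose peak-to-peak amplitude stays below threshold.

    Large voltage swings typically signal muscle (e.g. mouth-movement)
    artifacts rather than genuine brain activity, which is on the order
    of a few microvolts.
    """
    clean = []
    for epoch in epochs:
        peak_to_peak = max(epoch) - min(epoch)
        if peak_to_peak < threshold_uv:
            clean.append(epoch)
    return clean

# Illustrative data: two quiet epochs and one contaminated by a 150 µV swing.
epochs = [
    [2.0, -3.5, 4.1, -1.2],      # brain-scale signal (a few µV)
    [80.0, -70.0, 60.0, -50.0],  # 150 µV peak-to-peak: likely mouth artifact
    [1.1, 0.4, -2.2, 3.0],
]
print(len(reject_artifacts(epochs)))  # → 2
```

Real pipelines use more sophisticated methods (e.g. independent component analysis), but the principle, separating brain signal from much larger muscle noise, is the challenge the article describes.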

The research is the fruit of a collaboration between language laboratories at Pompeu Fabra University in Barcelona and Bangor University.

More information: The time course of word retrieval revealed by event-related brain potentials during overt speech. Albert Costa, et al., PNAS (Online Early Edition, November 23-27, 2009).

Provided by Bangor University

User comments : 1

Nov 25, 2009
This model is too simple. The picture of a (real-world) object acts as an input signal which triggers not just one (the correct) word, but a whole lot of associations of several degrees (associations of associations), which multiply for every language the subject is acquainted with. Out of this (seething) pile of words and partial words the brain somehow manages to filter the one with the highest weight. Most of the time, that is. In a fifth of a second.

Of course, computers can be faster. But they don't have to parse the universe of associations a middle-aged human being has acquired. They can't even translate a poem.
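The commenter's "highest weight wins" intuition can be sketched as a simple weighted lookup. All of the candidate words and their activation weights below are invented for illustration; this is a toy model, not a claim about the study's mechanism:

```python
# Toy spreading-activation lookup: each candidate word triggered by the
# picture carries an activation weight; the speaker utters the one with
# the highest weight.
associations = {
    "apple": 0.92,    # target word
    "pear": 0.31,     # semantic neighbour
    "manzana": 0.44,  # same concept in another language
    "app": 0.12,      # partial word form
}

chosen = max(associations, key=associations.get)
print(chosen)  # → apple
```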
