Sign language puzzle solved

Dec 15, 2009 by Lin Edwards

Two sign language interpreters working as a team for a school. Photo: Wikimedia Commons.

(PhysOrg.com) -- Scientists have known for 40 years that although individual words take longer to sign than to say, whole sentences can be signed, on average, in the same time it takes to speak them. Until now, however, they have not understood how this is possible.

Sign languages such as American Sign Language (ASL) use hand gestures to indicate words and are used by millions of deaf people around the world for communication. In ASL every sign is made up of a combination of hand gestures and handshapes. (British Sign Language is quite different from ASL, and the two sign languages are not mutually intelligible.)

Scientists Andrew Chong and colleagues at Princeton University in New Jersey have been studying the empirical entropy and redundancy in American Sign Language handshapes to find an answer to the puzzle. The term entropy is used in the research as a measure of the average information content of a unit of data.

The fundamental unit of data in ASL is the handshape: a specific movement and location of the hand. For spoken languages, the fundamental units are phonemes.

Their results show that the 45 handshapes making up American Sign Language carry more information per unit than English phonemes do. This means spoken English has more redundancy than its signed equivalent.

The researchers reached this conclusion by measuring the frequency of handshapes in videos of signing uploaded by deaf users to YouTube, DeafRead, and DeafVideo.tv, as well as in conversations in sign language recorded on campus. They found that the entropy (average information content) of handshapes is only about 0.5 bits per handshape below the theoretical maximum, while the entropy per phoneme in speech is around three bits below the maximum possible.
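The empirical entropy measurement described above can be sketched as follows. The handshape labels and counts here are invented for illustration; they are not taken from the paper's data:

```python
import math
from collections import Counter

def empirical_entropy(counts):
    """Shannon entropy, in bits per symbol, of observed frequency counts."""
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total)
                for n in counts.values() if n > 0)

# Hypothetical handshape tallies from transcribed video (illustrative only).
observed = Counter({"5": 120, "B": 95, "1": 90, "A": 60, "S": 35})

h = empirical_entropy(observed)
h_max = math.log2(45)  # maximum entropy if all 45 handshapes were equally likely
print(f"entropy = {h:.2f} bits/handshape (max for 45 shapes = {h_max:.2f})")
```

The gap between the observed entropy and the theoretical maximum is what the study treats as redundancy: the closer the observed value comes to the maximum, the less redundant the code.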

This means that even though individual signs are slower to make, signers can keep up with speakers: the lower redundancy of signing compensates for its slower rate.

Chong believes the signed language has less redundancy than the spoken language because less is needed. The redundancy in spoken language allows speech to be understood in a noisy environment, but Chong explains the "visual channel is less noisy than the auditory channel", so there is less chance of being misunderstood.
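The trade-off between redundancy and noise tolerance can be illustrated with a toy repetition code (my example, not the paper's): sending each bit three times costs transmission rate, but lets the receiver outvote occasional errors on a noisy channel.

```python
import random

def transmit(bits, flip_prob, rng):
    """A binary symmetric channel: flip each bit with probability flip_prob."""
    return [b ^ (rng.random() < flip_prob) for b in bits]

def encode_rep3(bits):
    """Add redundancy by repeating every bit three times."""
    return [b for b in bits for _ in range(3)]

def decode_rep3(bits):
    """Majority-vote each group of three received bits."""
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

rng = random.Random(0)
msg = [rng.randint(0, 1) for _ in range(1000)]

raw = transmit(msg, 0.1, rng)                               # no redundancy
coded = decode_rep3(transmit(encode_rep3(msg), 0.1, rng))   # redundant code

raw_errors = sum(a != b for a, b in zip(msg, raw))
coded_errors = sum(a != b for a, b in zip(msg, coded))
print(f"errors without redundancy: {raw_errors}, with redundancy: {coded_errors}")
```

On a quiet channel the extra repetitions buy little, which mirrors Chong's point: when the visual channel is already reliable, redundancy can be traded away for speed.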

The researchers speculated that errors are dealt with differently in signing and speaking. If hand gestures are not understood, the difficulty can be overcome by slowing the transition between them, but if speech is not understood, drawing out the phonemes does not always help.

Understanding sign language and its information content is essential for developing automated sign recognition technology, and for encoding and transmitting sign language electronically by means other than video recordings.

More information: Andrew Chong, Lalitha Sankar and H. Vincent Poor (Princeton University), "Frequency of Occurrence and Information Entropy of American Sign Language," arXiv:0912.1768, arxiv.org/abs/0912.1768
