Sign language puzzle solved

December 15, 2009, by Lin Edwards
Two sign language interpreters working as a team for a school. Photo: Wikimedia Commons.

(PhysOrg.com) -- Scientists have known for 40 years that although individual words take longer to sign than to say, sentences can be signed, on average, in the same time it takes to speak them. Until now, however, they have not understood how this is possible.

Sign languages such as American Sign Language (ASL) use hand gestures to represent words and are used by millions of deaf people around the world for communication. In ASL every sign is made up of a combination of hand gestures and handshapes. (British Sign Language is quite different from ASL, and the two are not mutually intelligible.)

Andrew Chong and colleagues at Princeton University in New Jersey have been studying the empirical entropy and redundancy of American Sign Language handshapes to find an answer to the puzzle. In this research, entropy is a measure of the average information content of a unit of data.
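Empirical entropy of this kind is standard Shannon entropy estimated from observed symbol frequencies. A minimal sketch in Python (the symbol stream here is an invented toy example, not data from the study):

```python
import math
from collections import Counter

def empirical_entropy(symbols):
    """Shannon entropy in bits per symbol, estimated from observed frequencies."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy example: a stream over a 4-symbol alphabet. If all four symbols were
# equally likely, the entropy would reach its maximum of log2(4) = 2 bits;
# a skewed distribution like this one yields less.
stream = list("AABABBBCCD")
H = empirical_entropy(stream)
```

The researchers applied the same idea with handshapes as the symbols, counting how often each of the 45 handshapes occurred in recorded signing.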

The fundamental unit of data in ASL is the handshape, while for spoken languages the fundamental units are phonemes. A handshape is a specific movement and location of the hand.

Their results show that the information carried per unit by the 45 handshapes making up American Sign Language is higher than that carried per phoneme. This means spoken English has more redundancy than its signed equivalent.

The researchers reached this conclusion by measuring the frequency of handshapes in videos of signing uploaded by deaf people to the websites YouTube, DeafRead, and DeafVideo.tv, as well as in videos of sign language conversations recorded on campus. They discovered that the entropy (information content) of the handshapes averages about 0.5 bits per shape below the theoretical maximum, while the entropy per phoneme in speech is around three bits below the maximum possible.
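The gap between measured entropy and the theoretical maximum translates directly into a redundancy figure. A rough back-of-the-envelope sketch using the article's numbers (the 40-phoneme inventory size for English is an assumption for illustration, not a figure from the study):

```python
import math

# From the article: 45 ASL handshapes, measured entropy ~0.5 bits/shape
# below the theoretical maximum; speech runs ~3 bits/phoneme below its maximum.
N_HANDSHAPES = 45
h_max_asl = math.log2(N_HANDSHAPES)   # maximum possible: ~5.49 bits/handshape
h_asl = h_max_asl - 0.5               # measured average

N_PHONEMES = 40                       # assumed English phoneme-inventory size
h_max_speech = math.log2(N_PHONEMES)  # maximum possible: ~5.32 bits/phoneme
h_speech = h_max_speech - 3.0

# Redundancy = fraction of channel capacity not used for new information.
redundancy_asl = 1 - h_asl / h_max_asl
redundancy_speech = 1 - h_speech / h_max_speech
```

Under these assumptions signing wastes only about 9% of its capacity on redundancy, versus more than half for speech, which is why each sign can carry more information than each phoneme.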

This means that even though forming the individual signs is slower, signers can keep up with speakers because each sign carries more information: the lower redundancy compensates for the slower rate of signing.

Chong believes the signed language has less redundancy than the spoken language because less is needed. The redundancy in spoken language allows speech to be understood in a noisy environment, but Chong explains the "visual channel is less noisy than the auditory channel", so there is less chance of being misunderstood.

The researchers speculated that errors are dealt with differently in signing and speaking. If hand gestures are not understood, difficulties can be overcome by slowing the transition between them, but if speech is not understood speaking phonemes for longer times does not always solve the difficulty.

Understanding sign language and its information content is essential for developing automated sign-recognition technology, and for encoding and transmitting sign language electronically by means other than video recordings.


More information: Andrew Chong, Lalitha Sankar and H. Vincent Poor (Princeton University), "Frequency of Occurrence and Information Entropy of American Sign Language," arXiv:0912.1768, arxiv.org/abs/0912.1768

