Kinect Sign Language Translator expands communication possibilities for the deaf

Nov 01, 2013
Students of the special education school at Beijing Union University try out the Kinect Sign Language Translator prototype.

Worldwide, an estimated 360 million people are deaf or hard of hearing. Because the majority of hearing individuals do not understand sign language, people who are deaf often have difficulties interacting with the hearing. While other methods exist, researchers hope to make translation even easier with a cost-effective, efficient prototype that translates sign language into spoken language—and spoken language into sign language—in real time.

Dedicated researchers in China have created the Kinect Sign Language Translator, a prototype system that understands the gestures of sign language and converts them to spoken and written language—and vice versa. The system captures a conversation from both sides: as the deaf person signs, a written and spoken translation is rendered in real time, while the hearing person's spoken words are turned into accurate, understandable signs.

This project was a result of collaboration, facilitated by Microsoft Research Connections, between the Chinese Academy of Sciences, Beijing Union University, and Microsoft Research Asia, each of which made crucial contributions.

Professor Xilin Chen, deputy director of the Institute of Computing Technology at the Chinese Academy of Sciences, has spent much of the past decade studying sign language recognition, hoping to devise a way to enable signed communication between people with hearing loss and their hearing neighbors. "We knew that information technology, especially computer technology, has grown up very fast. So from my point of view, I thought this is the right time to develop some technology to help [the deaf community]. That's the motivation," Chen explained.

Enter Microsoft Kinect

Motivation met action when Kinect for Xbox came on the scene. Originally developed for gaming, the Kinect's sensors read a user's body position and movements and, with the help of a computer, translate them into commands. It thus has tremendous potential for understanding the complex gestures that make up sign language and for translating the signs into spoken or written words and sentences.
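The article does not describe the project's recognition algorithm, but the core idea of matching a sensor-captured movement against stored sign examples can be sketched. The following is a minimal, hypothetical illustration (not the researchers' actual method): it treats a sign as a sequence of 3D joint positions, as a Kinect-style sensor might report, and classifies an observed trajectory by its dynamic-time-warping (DTW) distance to labeled templates. All names and the toy data are invented for illustration.

```python
import math

def dtw_distance(traj_a, traj_b):
    """Dynamic-time-warping distance between two sequences of 3-D points."""
    n, m = len(traj_a), len(traj_b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(traj_a[i - 1], traj_b[j - 1])
            # Allow stretching/compressing in time by taking the cheapest
            # of the three alignment moves.
            cost[i][j] = d + min(cost[i - 1][j],
                                 cost[i][j - 1],
                                 cost[i - 1][j - 1])
    return cost[n][m]

def classify_sign(trajectory, templates):
    """Return the label of the template trajectory closest under DTW."""
    return min(templates, key=lambda label: dtw_distance(trajectory, templates[label]))

# Toy usage with two made-up "signs" (hypothetical data, not real sign language):
templates = {
    "hello":  [(0, 0, 0), (0, 1, 0), (0, 2, 0)],   # upward hand motion
    "thanks": [(0, 0, 0), (1, 0, 0), (2, 0, 0)],   # sideways hand motion
}
observed = [(0, 0.1, 0), (0, 1.1, 0), (0, 2.0, 0)]
print(classify_sign(observed, templates))  # closest to the "hello" template
```

A production system would, of course, track many joints per frame, normalize for body size and position, and use a statistical model rather than nearest-template matching; the sketch only shows the shape of the problem.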

The Kinect sensor (under the monitor) captures the signer's movements so that the system can translate them into spoken language.

The November 2010 release of Kinect stirred tremendous interest in the research community. That interest intensified with the June 2011 release of the Microsoft-supported Kinect for Windows software development kit (SDK), which helped make the technology broadly available for scientific use. Microsoft Research Connections was eager to encourage the most promising uses of Kinect, but with so much fervor over Kinect in the research world, it was hard to select which projects to support.

Stewart Tansley, director of Natural User Interface at Microsoft Research Connections, turned to Microsoft Research's worldwide labs, asking them to submit the best Kinect academic collaborations they had under consideration. Microsoft Research Asia submitted the work of Principal Researcher Ming Zhou—who was heavily involved in natural language models and translation and had forged a tight collaboration with the Chinese Academy of Sciences. The project was just what Microsoft Research Connections was looking for.

Video: Opening new doors of communication for sign language users.

Complementing Chen's group at the Chinese Academy of Sciences were Zhou and other senior researchers from Microsoft Research Asia, where a great deal of automated translation work was already underway, including advanced research into real-time machine translations of English to Mandarin.

Also essential to this project was the participation of the special education program at Beijing Union University. "One unique contribution of this project is that it is a joint effort between software researchers and the deaf and hard of hearing," Zhou says. "A group of teachers and students from Beijing Union University joined this project, and this enabled tests of our algorithms to be conducted on real-world data."

Testing the system

Among the student participants was Dandan Yin, a dynamic, accomplished young woman who is deaf. An especially proficient and graceful signer, Yin told the research team that working on this project was the fulfillment of her childhood dream "to create a machine for people with hearing impairments."

Watching Yin sign back and forth with an avatar, you can see the potential future of communication between people who are deaf and those who can hear. And while all the collaborators are quick to stress that the Kinect Sign Language Translator is a prototype, not a finished product, all are equally vocal in expressing their belief that it has the potential to provide a cost-effective and efficient means of communication between those who are fluent in sign language and those whose signing is limited to crude gestures.  

Tansley conjures up the scenario of a deaf person visiting a physician who doesn't know sign language. While acknowledging that the patient could pre-schedule an interpreter or resort to communicating with paper and pen, he observes that such interactions "…would be very artificial. But with this technology, they could simply use their natural sign language." Thus a signer would be empowered to communicate independently with a non-signer without scheduling an interpreter or resorting to other methods.

Tansley relays that the system could even open up new job opportunities for deaf people. "Imagine an information kiosk, say, at an airport, and rather than the person seeking information being deaf, imagine that the person staffing the information kiosk was deaf. Now, a hearing person could come to that kiosk and ask questions of the deaf person and wouldn't have to understand or use sign language…the system could help them communicate."

Those scenarios don't seem too far off, thanks to the dedicated researchers and partners who are working to make the Kinect Sign Language Translator a reality—and, in the process, fulfilling the childhood dream of Dandan Yin and millions of other deaf and hard-of-hearing people in China and around the world.

