Generating 'oohs' and 'aahs': Vocal Joystick uses voice to surf the Internet

Oct 09, 2007
Doctoral student Brandi House uses Vocal Joystick to control the movement of a robotic arm. The screen on the lower right shows how the software analyzes her vocalizations to create instructions for the arm's movement. Credit: University of Washington

The Internet offers wide appeal to people with disabilities. But many of those same people find it frustrating or impossible to use a handheld mouse. Software developed at the University of Washington provides an alternative using one of the oldest and most versatile modes of communication: the human voice.

"There are many people who have perfect use of their voice who don't have use of their hands and arms," said Jeffrey Bilmes, a UW associate professor of electrical engineering. "I think there are several reasons why Vocal Joystick might be a better approach, or at least a viable alternative, to brain-computer interfaces." The tool's latest developments will be presented this month in Tempe, Ariz. at the Assets Conference on Computers and Accessibility.

Vocal Joystick detects sounds 100 times a second and instantaneously turns each sound into movement on the screen. Different vowel sounds dictate the direction: "ah," "ee," "aw," "oo" and other sounds move the cursor in one of eight directions. Users can transition smoothly from one vowel to another, and louder sounds make the cursor move faster. The sounds "k" and "ch" simulate clicking and releasing the mouse buttons.
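The control scheme described above can be sketched in a few lines of code. This is a minimal illustration, not the actual Vocal Joystick software: the vowel labels, their compass angles, and the loudness scaling below are assumptions chosen to match the article's description (four named vowels plus blends covering eight directions, sampled 100 times a second, with loudness driving speed).

```python
import math

# Hypothetical mapping from classified vowel to a compass angle.
# The article names "ah," "ee," "aw" and "oo"; the diagonal blends
# and all specific angles here are illustrative assumptions.
VOWEL_ANGLES = {
    "ah": 0, "aw": 90, "oo": 180, "ee": 270,           # four named vowels
    "ah+aw": 45, "aw+oo": 135, "oo+ee": 225, "ee+ah": 315,  # blended diagonals
}

def cursor_velocity(vowel, loudness, base_speed=100.0):
    """Turn one vocal frame (1/100 s) into a cursor step (dx, dy).

    vowel    -- vowel label, assumed to come from a vowel classifier
    loudness -- normalized energy in [0, 1]; louder means faster
    """
    angle = math.radians(VOWEL_ANGLES[vowel])
    speed = base_speed * loudness / 100.0  # per-frame step at 100 Hz
    # Screen y grows downward, so "up" (90 degrees) is a negative dy.
    return (speed * math.cos(angle), -speed * math.sin(angle))

dx, dy = cursor_velocity("aw", 0.5)  # a half-loudness "aw" steps the cursor upward
```

A real system would run a vowel classifier on each 10-millisecond audio frame and feed its output through a mapping like this, with separate detectors for the discrete "k" and "ch" click sounds.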

Versions of Vocal Joystick exist for browsing the Web, drawing on a screen, controlling a cursor and playing a video game. A version also exists for operating a robotic arm, and Bilmes believes the technology could be used to control an electronic wheelchair.

Existing substitutes for the handheld mouse include eye trackers, sip-and-puff devices, head-tracking systems and other tools. Each technology has drawbacks. Eye-tracking devices are expensive and require that the eye simultaneously take in information and control the cursor, which can cause confusion. Sip-and-puff joysticks held in the mouth must be spit out if the user wants to speak, and can be tiring. Head-tracking devices require neck movement and expensive hardware.

Vocal Joystick requires only a microphone, a computer with a standard sound card and a user who can produce vocal sounds.

"A lot of people ask: 'Why don't you just use speech recognition"'" Bilmes said. "It would be very slow to move a cursor using discrete commands like 'move right' or 'go faster.' The voice, however, is able to do continuous commands quickly and easily." Early tests suggest that an experienced user of Vocal Joystick would have as much control as someone using a handheld device.

In the laboratory, doctoral student Jonathan Malkin, who helped develop the tool, uses Vocal Joystick to play a game called Fish Tale. It takes two minutes to train the program for Malkin's voice. He then moves the fish character easily around the screen, raising his voice slightly to speed up and avoid being eaten by a predator fish.

The newest development, which will be presented at the October meeting in Tempe, uses Vocal Joystick to control a robotic arm. The pitch of the tone moves the arm up and down; other commands are unchanged. This is the first time that vocal commands have been used to control a three-dimensional object, Bilmes said.
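The pitch-based vertical control could be sketched as follows. The neutral pitch, gain, and function name are illustrative assumptions, not details from the actual system: the article says only that pitch moves the arm up and down while the other commands are unchanged.

```python
def arm_vertical_velocity(pitch_hz, neutral_hz=150.0, gain=0.01):
    """Map voice pitch to the arm's vertical velocity (hypothetical).

    Pitch above the speaker's neutral tone raises the arm; pitch
    below it lowers the arm. The 2-D vowel/loudness controls from
    the cursor version would handle the other two axes unchanged.
    """
    return gain * (pitch_hz - neutral_hz)
```

In this sketch, humming at the neutral pitch holds the arm steady, and the deviation from neutral sets how fast it moves, mirroring how loudness sets cursor speed in the 2-D version.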

One initial concern, he said, was whether people would feel self-conscious using the tool.

"But once you try it you immediately forget what you're saying," Bilmes said. "I usually go to the New York Times' Web site to test the system and then I get distracted and start reading the news. I forget that I'm using it."

To test the device, the group has been working with about eight spinal-cord injury patients at the UW Medical Center since March.

"It's a really exciting idea. I think it has tremendous potential," said Kurt Johnson, a professor of rehabilitation medicine who is helping with the tests.

Bilmes said he hopes people will become more adept at using the system over time. Future research will incorporate more advanced controls that use more aspects of the human voice, such as repeated vocalizations, vibrato, degree of nasality and trills.

"While people use their voices to communicate with just words and phrases," Bilmes said, "the human voice is an incredibly flexible instrument, and can do so much more."

Source: University of Washington
