Pixels guide the way for the visually impaired

Feb 28, 2013
(Phys.org)—Images have been transformed into pixels and projected onto a headset to help the visually impaired in everyday tasks such as navigation, route-planning and object finding.

The system, developed by researchers at the University of Southern California using a video camera and a head-mounted display, is hoped to provide more information and enhance the vision of patients already fitted with retinal implants.

Lead author of the paper, James Weiland, said: "Blind people with retinal implants can detect motion and large objects and have improved orientation when walking. In most cases, they can also read large letters."

"At the moment, retinal implants are still low-resolution. We believe that our algorithm will enhance retinal implants by providing the user with more information when they are looking for a specific item."

The findings have been presented today, 1 March, in IOP Publishing's Journal of Neural Engineering.

A total of 19 healthy subjects took part in the study, each first undergoing training to get used to the pixelated vision. They were fitted with a head-mounted display (HMD) and performed three different experiments: walking an obstacle course; finding objects on an otherwise empty table; and searching for a particular target in a cluttered environment.

A video camera mounted on the HMD collected real-world information in the subject's field of view. Mathematical algorithms converted the real-world images into pixels, which were then displayed on the HMD's screen in front of the subject.
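As a rough illustration of this conversion step, the sketch below (my own, not the authors' code; the grid size and block-averaging approach are assumptions) downsamples a grayscale frame to a coarse grid, simulating the low-resolution "pixelated vision" of a retinal implant:

```python
import numpy as np

def pixelate(image, grid=(32, 32)):
    """Downsample a 2-D grayscale image to a coarse grid by block
    averaging, simulating low-resolution prosthetic vision."""
    h, w = image.shape
    gh, gw = grid
    # Trim the image so it divides evenly into blocks
    image = image[: h - h % gh, : w - w % gw]
    bh, bw = image.shape[0] // gh, image.shape[1] // gw
    # Reshape into (grid rows, block rows, grid cols, block cols)
    blocks = image.reshape(gh, bh, gw, bw)
    # Average each block to a single "pixel"
    return blocks.mean(axis=(1, 3))
```

Each coarse pixel would then drive one element of the display (or, ultimately, one electrode of an implant).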

The algorithms used intensity, saturation and edge information from the camera's images to pick out the five most important, or salient, locations in the image. Blinking dots at the side of the display provided the subjects with additional directional cues if needed.
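A saliency computation of this kind might look something like the following toy sketch. This is a simplified illustration only: the equal feature weighting, min-max normalisation and gradient-based edge detector are my assumptions, and the algorithm described in the paper is more sophisticated.

```python
import numpy as np

def top_salient_points(rgb, k=5):
    """Toy saliency map combining intensity, saturation and edge
    strength; returns the k strongest locations as (row, col)."""
    rgb = rgb.astype(float)
    intensity = rgb.mean(axis=2)                      # brightness
    saturation = rgb.max(axis=2) - rgb.min(axis=2)    # colourfulness
    gy, gx = np.gradient(intensity)
    edges = np.hypot(gx, gy)                          # edge strength

    def norm(m):
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng else np.zeros_like(m)

    # Combine the normalised feature maps with equal weight
    saliency = norm(intensity) + norm(saturation) + norm(edges)
    flat = np.argsort(saliency.ravel())[::-1][:k]
    return [tuple(np.unravel_index(i, saliency.shape)) for i in flat]
```

In the study's setup, locations like these would be what the blinking dots at the edge of the display point the subject towards.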

All three of the experiments were performed with and without cues. When subjects used the directional cues, their head movements, the time to complete the task and the number of errors were all significantly reduced.

The subjects learnt to adapt to pixelated vision in all of the tasks, suggesting that image processing algorithms can be used to provide greater confidence to patients when performing tasks, especially in a new environment.

It is possible that the device could be fitted with voice description so that the subjects are provided with cues such as "the red target is to the left".

"We are currently looking to take this a step further with object recognition, so instead of telling subjects that 'the red object is to the left', it will tell them that 'the soda can you want is to the left'," continued Weiland.


More information: "Performance of visually guided tasks using simulated prosthetic vision and saliency-based cues" N Parikh et al 2013 J. Neural Eng. 10 026017, www.iopscience.iop.org/1741-2552/10/2/026017

