Pixels guide the way for the visually impaired

(Phys.org)—Images have been transformed into pixels and projected onto a headset to help the visually impaired in everyday tasks such as navigation, route-planning and object finding.

The system, developed by researchers at the University of Southern California using a video camera and a head-mounted display, could provide more information and enhance the vision of patients already fitted with retinal implants, the team hopes.

Lead author of the paper, James Weiland, said: "Blind people with retinal implants can detect motion and large objects and have improved orientation when walking. In most cases, they can also read large letters."

"At the moment, retinal implants are still low-resolution. We believe that our algorithm will enhance retinal implants by providing the user with more information when they are looking for a specific item."

The findings are published today, 1 March, in IOP Publishing's Journal of Neural Engineering.

A total of 19 healthy subjects took part in the study; each first undertook training to get used to the pixelated vision. During the study, they were fitted with a head-mounted display (HMD) and took part in three different experiments: walking an obstacle course; finding objects on an otherwise empty table; and searching for a particular target in a cluttered environment.

A video camera mounted on the HMD captured real-world images within the subject's field of view. Mathematical algorithms converted these images into pixels, which were then displayed on the HMD's screen in front of the subject.
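The pixelation step amounts to reducing a camera frame to a coarse grid of intensity values. A minimal sketch of this idea, using block averaging in NumPy (the grid size here is an illustrative assumption, not the resolution used in the study):

```python
import numpy as np

def pixelate(image, grid=(25, 25)):
    """Downsample a grayscale image to a coarse grid by block averaging,
    simulating low-resolution "pixelated" prosthetic vision.
    The grid size is an assumed value for illustration."""
    gh, gw = grid
    h, w = image.shape
    # Trim edges so the image divides evenly into blocks
    image = image[: h - h % gh, : w - w % gw]
    bh, bw = image.shape[0] // gh, image.shape[1] // gw
    # Average each block down to a single intensity value
    return image.reshape(gh, bh, gw, bw).mean(axis=(1, 3))
```

Each output cell then stands in for one stimulation site of a low-resolution display or implant.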

The algorithms used intensity, saturation and edge information from the camera's images to pick out the five most important, or salient, locations in the image. Blinking dots at the side of the display provided the subjects with additional directional cues if needed.
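The general approach of combining feature channels into a saliency map can be sketched as follows. This is an illustrative simplification, not the authors' actual algorithm: the equal channel weighting and the gradient-magnitude edge measure are assumptions made for the example.

```python
import numpy as np

def top_salient_points(intensity, saturation, k=5):
    """Combine intensity, saturation and edge channels into a crude
    saliency map and return the k strongest pixel locations.
    Channel weighting and edge measure are illustrative assumptions."""
    # Edge information: gradient magnitude of the intensity channel
    gy, gx = np.gradient(intensity.astype(float))
    edges = np.hypot(gx, gy)

    # Normalise each channel to [0, 1] before summing
    def norm(c):
        rng = c.max() - c.min()
        return (c - c.min()) / rng if rng > 0 else np.zeros(c.shape)

    saliency = norm(intensity) + norm(saturation) + norm(edges)

    # Indices of the k most salient pixels, strongest first
    flat = np.argsort(saliency, axis=None)[::-1][:k]
    return [tuple(np.unravel_index(i, saliency.shape)) for i in flat]
```

In the study, the top locations drove the blinking directional cues at the edge of the display, steering the subject's gaze toward candidate targets.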

All three of the experiments were performed with and without cues. When subjects used the directional cues, their head movements, the time to complete the task and the number of errors were all significantly reduced.

The subjects learnt to adapt to pixelated vision in all of the tasks, suggesting that image processing algorithms can be used to provide greater confidence to patients when performing tasks, especially in a new environment.

It is possible that the device could be fitted with voice description so that the subjects are provided with cues such as "the red target is to the left".

"We are currently looking to take this a step further with object recognition, so instead of telling subjects that 'the red object is to the left', it will tell them that 'the soda can you want is to the left'," continued Weiland.



More information: "Performance of visually guided tasks using simulated prosthetic vision and saliency-based cues" N Parikh et al 2013 J. Neural Eng. 10 026017, www.iopscience.iop.org/1741-2552/10/2/026017
Journal information: Journal of Neural Engineering

Citation: Pixels guide the way for the visually impaired (2013, February 28) retrieved 5 April 2020 from https://phys.org/news/2013-02-pixels-visually-impaired.html