Researcher seeks to lessen failures in computerized visual recognition programs

Apr 17, 2014 by Steven D Mackay

Computer programs that use facial or image recognition, whether in security cameras or in applications that search databases for everything from photographs of wanted criminals to images of bears, are like any other technological marvel: fast and versatile, but prone to failure, and limited to one-way communication, taking orders from the user.

Devi Parikh, an assistant professor in the Virginia Tech Bradley Department of Electrical and Computer Engineering, wants to change that by creating a two-way communication path between user and algorithm. The two-way system won't directly prevent failures and faults, but it will help users better diagnose computer problems, correct errors, and prevent future occurrences.

"Models that characterize the failures of a system can then also be used to predict oncoming ," said Parikh, whose research project is at the center of a $150,000 U.S. Army Research Office Young Investigators' Award, and well could have future applications in a wide variety of . "Such a warning signal can be valuable to a downstream application that uses the output of the machine perception system as input. These techniques are broadly applicable to many research and development efforts on intelligent and autonomous systems."

Today, using a program, or almost any computer system, that proves faulty or makes errors is much like talking with a small child who may be ill: the adult can tell something is wrong from the child's behavior, but the child does not have the vocabulary to express why he or she is feeling ill. The parent must guess, or seek help with a diagnosis, or the child remains sick.

Computers act much the same way during a system or program crash or failure. When a facial recognition system fails to recognize or track a person's face, it may not be able to tell the user, likely law enforcement, why it is failing, or even that it is failing. The user must guess whether the program is failing because of, say, low or harsh light, or because the subject's face is at an odd angle, askew from the lens.

Parikh wants to remove the guesswork, allowing the system or application to directly tell the user the cause of failure.

Once the user is aware of the fault, they can take action to correct the error, such as switching to a different camera to capture the person's face from another angle, or narrowing the lens aperture to take in less light and avoid excessive glare, and so obtain a better, usable image.
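A minimal sketch of this kind of self-diagnosis might inspect the input image and report a likely cause in plain language. The function name and brightness thresholds below are hypothetical choices for illustration, not anything from Parikh's system.

```python
# Illustrative sketch: instead of failing silently, the system checks
# its own input for obvious image-quality problems and reports a cause
# the user can act on. Thresholds are arbitrary example values.
import numpy as np

def diagnose_failure(image: np.ndarray) -> str:
    """Return a human-readable guess at why recognition might fail."""
    brightness = float(image.mean())
    if brightness < 40:    # very dark frame
        return "low light: brighten the scene or lengthen the exposure"
    if brightness > 215:   # blown-out frame
        return "glare/overexposure: narrow the aperture or cut the lighting"
    return "no obvious image-quality problem detected"

dark_frame = np.full((64, 64), 10, dtype=np.uint8)
print(diagnose_failure(dark_frame))
```

In a real system the checks would be richer (face angle, blur, occlusion), but the principle is the same: turn a silent failure into an actionable message.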

In much the same way, if a computer is programmed to sort through thousands of images for photographs of bears, but its initial model is based only on images of a grizzly standing near a lake, the system may mistakenly treat the body of water as evidence of a bear, and it will miss images of polar bears because it was taught to recognize only one kind of bear. Parikh wants to create systems smart enough to ask the user questions that would avoid such errors or shortcomings, saving the user time and, likely, money.
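The bear example is a classic spurious correlation, and it can be demonstrated in a few lines. The two hand-made features below ("bear shape present", "lake in background") are stand-ins invented for this sketch; because every training bear appears with a lake, a simple classifier cannot tell which feature actually matters.

```python
# Illustrative sketch of the bear example: training photos pair
# "grizzly" with "lake in background", so a naive classifier splits
# its evidence between bear and lake and is confused by a polar bear
# on ice or an empty lake shot.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Feature columns: [bear_shape_present, lake_in_background].
# In training, every bear photo was taken near a lake.
X_train = np.array([[1, 1]] * 50 + [[0, 0]] * 50, dtype=float)
y_train = np.array([1] * 50 + [0] * 50)

model = LogisticRegression().fit(X_train, y_train)

polar_bear = np.array([[1.0, 0.0]])  # bear present, no lake
empty_lake = np.array([[0.0, 1.0]])  # lake present, no bear
print("polar bear score:", model.predict_proba(polar_bear)[0, 1])
print("empty lake score:", model.predict_proba(empty_lake)[0, 1])
```

The model scores an empty lake about as highly as a polar bear, because the training data never separated the two cues. A system that could ask the user "does the water matter, or the animal?" would resolve exactly this ambiguity.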

"A semantic characterization of the failure modes of a system can thus allow us to design better systems in the future, as well as to make today's computer vision systems more usable even with their existing imperfections," Parikh wrote in her proposal.

