Combining computer vision and brain-computer interface for faster mine detection

May 4, 2015
Subjects in the study viewed images while wearing an EEG headset. Credit: Neuromatters

Computer scientists at the University of California, San Diego, have combined sophisticated computer vision algorithms with a brain-computer interface to find mines in sonar images of the ocean floor. The study shows that the new method speeds up detection considerably compared with existing approaches, chiefly visual inspection by a mine detection expert.

"Computer vision and human vision each have their specific strengths, which combine to work well together," said Ryan Kastner, a professor of computer science at the Jacobs School of Engineering at UC San Diego. "For instance, computers are very good at finding subtle, but mathematically precise patterns while people have the ability to reason about things in a more holistic manner, to see the big picture. We show here that there is great potential to combine these approaches to improve performance."

Researchers worked with the U.S. Navy's Space and Naval Warfare Systems Center Pacific (SSC Pacific) in San Diego to collect a dataset of 450 sonar images containing 150 inert, bright-orange mines placed in test fields in San Diego Bay. The images were collected with an underwater vehicle equipped with sonar. In addition, researchers trained their computer vision algorithms on a data set of 975 images of mine-like objects.

In the study, researchers first showed six subjects the complete dataset, before it had been screened by computer vision algorithms. They then ran the image dataset through mine-detection computer vision algorithms they developed, which flagged the images most likely to contain mines. Next, they showed the results to subjects outfitted with an electroencephalogram (EEG) system programmed to detect brain activity indicating that a subject had reacted to an image because it contained a salient feature, likely a mine. Subjects detected mines much faster when the images had already been processed by the algorithms. The computer scientists published their results recently in the IEEE Journal of Oceanic Engineering.
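As a rough sketch of that two-stage workflow, the Python below uses hypothetical `cv_prescreen` and `eeg_flags_interest` callables to stand in for the mine-detection algorithms and the EEG decoding; neither name comes from the study.

```python
def detect_mines(sonar_images, cv_prescreen, eeg_flags_interest):
    """Two-stage triage: computer vision narrows the image set, then an
    EEG-monitored human reviews only the flagged candidates."""
    # Stage 1: the vision algorithms flag images that most likely contain mines.
    candidates = [img for img in sonar_images if cv_prescreen(img)]
    # Stage 2: brief presentation to a subject wearing EEG; keep images whose
    # brain response suggests something salient was seen.
    return [img for img in candidates if eeg_flags_interest(img)]

# Example with dummy stand-ins for the two steps:
hits = detect_mines(range(1000),
                    cv_prescreen=lambda i: i % 10 == 0,
                    eeg_flags_interest=lambda i: i % 50 == 0)
```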

The algorithms are what's known as a series of classifiers, working in succession to improve speed and accuracy. The classifiers are designed to capture changes in pixel intensity between neighboring regions of an image. The system's goal is to retain 99.5 percent of the true positives while letting through only 50 percent of the false positives at each pass through a classifier. As a result, the detection rate for true positives stays high, while false positives are whittled down with each pass.
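A minimal sketch of how such a chain of stages behaves is given below, assuming hypothetical stage classifiers and a simple intensity-difference feature; only the per-stage targets of 99.5 percent true positives kept and 50 percent false positives passed come from the article, the rest is illustrative.

```python
import numpy as np

def intensity_difference(chip, r, c, h, w):
    """Simple feature: difference in mean pixel intensity between two
    vertically adjacent regions of an image chip (illustrative only)."""
    upper = chip[r:r + h, c:c + w].mean()
    lower = chip[r + h:r + 2 * h, c:c + w].mean()
    return upper - lower

def run_stages(chip, stages):
    """Run a chip through each stage in turn, rejecting on the first failure.

    `stages` is a list of callables returning True (keep) or False (discard)."""
    for stage in stages:
        if not stage(chip):
            return False     # rejected early; later stages cost nothing
    return True              # survived every stage: flagged as a likely mine

def compounded_rates(n_stages, tp_per_stage=0.995, fp_per_stage=0.5):
    """True/false positive rates after n_stages successive passes."""
    return tp_per_stage ** n_stages, fp_per_stage ** n_stages

if __name__ == "__main__":
    chip = np.random.rand(100, 50)   # a random 100-by-50 "chip" for the demo
    print("feature value:", round(intensity_difference(chip, 10, 10, 20, 30), 3))
    for n in (1, 5, 10):
        tp, fp = compounded_rates(n)
        print(f"{n:2d} stage(s): true positives ~{tp:.1%}, false positives ~{fp:.2%}")
```

Because the rates multiply across passes, ten such stages would still keep roughly 95 percent of the true mines while passing only about 0.1 percent of the false alarms, which is why true positives stay high as false positives fall away.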

Researchers took several versions of the dataset generated by the classifiers and ran them by six subjects outfitted with the EEG gear, which had first been calibrated for each subject. It turned out that subjects performed best on the dataset containing the most conservative results generated by the computer vision algorithms. They sifted through a total of 3,400 image chips sized at 100 by 50 pixels. Each chip was shown to the subject for only one-fifth of a second (0.2 seconds), just enough for the EEG-related algorithms to determine whether the subject's brain signals showed that they had seen anything of interest.
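A minimal sketch of that presentation loop follows, assuming a hypothetical `eeg_interest_score` decoder and `show_chip` display routine; neither is from the paper, though the chip size and the 0.2-second presentation window are.

```python
import random
import time

CHIP_SIZE = (100, 50)    # image chips of 100 by 50 pixels
PRESENT_SECONDS = 0.2    # each chip is on screen for one-fifth of a second

def rsvp_triage(chips, show_chip, eeg_interest_score, threshold=0.5):
    """Present chips in rapid succession and keep the ones whose decoded
    EEG response crosses `threshold`.

    `show_chip` and `eeg_interest_score` are stand-ins for the display and
    the per-subject calibrated EEG decoder."""
    flagged = []
    for chip in chips:
        show_chip(chip)
        time.sleep(PRESENT_SECONDS)            # 0.2 s presentation window
        if eeg_interest_score(chip) >= threshold:
            flagged.append(chip)
    return flagged

if __name__ == "__main__":
    # Dummy stand-ins so the sketch runs end to end.
    chips = [f"chip_{i:04d}" for i in range(10)]
    kept = rsvp_triage(chips,
                       show_chip=lambda c: None,
                       eeg_interest_score=lambda c: random.random())
    print(f"{len(kept)} of {len(chips)} chips flagged for closer review")
```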

All subjects performed better than they had when shown the full set of images without the benefit of prescreening by the computer vision algorithms. Some subjects also performed better than the computer vision algorithms did on their own.

"Human perception can do things that we can't come close to doing with computer vision," said Chris Barngrover, who earned a Ph.D. in Kastner's research group and is currently working at SSC Pacific. "But doesn't get tired or stressed. So it seemed natural for us to combine the two."
