'Now you see it, now you don't'

February 16, 2009
Generated images

(PhysOrg.com) -- Queen Mary scientists have, for the first time, used computer artificial intelligence to create previously unseen types of pictures to explore the abilities of the human visual system.

Writing in the journal Vision Research, Professor Peter McOwan, and Milan Verma from Queen Mary's School of Electronic Engineering and Computer Science report the first published use of an artificial intelligence computer program to create pictures and stimuli to use in visual search experiments.

They found that when it comes to searching for a target in pictures, we don't have two special mechanisms in the brain - one for easy searches and one for hard - as previously suggested. Instead, a single brain mechanism handles both, and it simply finds the task harder to complete as the search becomes more difficult.

The team developed a 'genetic algorithm', based on a simple model of evolution, that can breed a range of images and visual stimuli, which were then used to test people's brain performance. By using artificial intelligence to design the test patterns, the team removed the risk of unintentionally biasing the results, which could have occurred if the researchers had designed the test pictures themselves.
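To illustrate the idea, here is a minimal sketch (not the authors' code; the glyph encoding, fitness function, and all parameters are assumptions for illustration) of a genetic algorithm that breeds distractor patterns for a visual search display. Each candidate is an 8-bit glyph, and fitness rewards a chosen level of similarity to a fixed target glyph, so evolution can dial search difficulty up or down:

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical target glyph

def similarity(glyph):
    """Fraction of bits shared with the target (low = easy pop-out)."""
    return sum(a == b for a, b in zip(glyph, TARGET)) / len(TARGET)

def fitness(glyph, desired):
    """Higher when the glyph's similarity matches the desired difficulty."""
    return -abs(similarity(glyph) - desired)

def evolve(desired, pop_size=30, generations=200, seed=0):
    rng = random.Random(seed)
    # Start from a random population of glyphs.
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half as parents.
        pop.sort(key=lambda g: fitness(g, desired), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            # Single-point crossover between two parents.
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]
            # Occasional mutation flips one bit.
            if rng.random() < 0.1:
                i = rng.randrange(len(TARGET))
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return pop[0]

easy = evolve(desired=0.25)  # distractors unlike the target: fast 'pop out'
hard = evolve(desired=0.90)  # distractors similar to the target: slow search
```

The same breed-and-select loop could in principle target any measurable property of a stimulus, which is what lets the experimenters span the range from easy to hard searches without hand-designing the patterns.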

Manually marked target

The AI generated a picture where a grid of small computer-created characters contains a small 'pop out' region of a different character. Professor Peter McOwan, who led the project, explains: "A 'pop out' is when you can almost instantly recognise the 'different' part of a picture, for example, a block of Xs against a background of Os. If it's a block of letter Ls against a background of Ts that's far harder for people to find. It was thought that we had two different brain mechanisms to cope with these sorts of cases, but our new approach shows we can get the AI to create new sorts of patterns where we can predictably set the level of difficulty of the 'spot the difference' task."

Milan Verma added: "Our AI system creates a unique range of different shapes that run from easy-to-spot differences to hard-to-spot differences, through all points in between. When we then get people to actually perform the search task, we find that the time they take varies in the way we would expect."
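A toy reaction-time model (illustrative only, not taken from the paper; the baseline, slope, and similarity values are invented) shows what a single-mechanism, continuum account predicts: search time grows smoothly with target-distractor similarity, rather than splitting into two distinct regimes:

```python
def predicted_search_time(similarity, n_items, base_ms=400.0, slope_ms=60.0):
    """Mean reaction time: a fixed baseline plus a per-item cost that
    scales with how similar distractors are to the target."""
    return base_ms + slope_ms * similarity * n_items

# Easy display (low similarity) vs hard display (high similarity), 16 items:
easy_rt = predicted_search_time(0.2, 16)  # 400 + 60 * 0.2 * 16 = 592.0 ms
hard_rt = predicted_search_time(0.9, 16)  # 400 + 60 * 0.9 * 16 = 1264.0 ms
```

Because difficulty enters as a continuous parameter, intermediate stimuli produce intermediate times, matching the smooth variation the experimenters observed.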

This new AI based experimental technique could also be applied to other experiments in the future, providing vision scientists with new ways to generate custom images for their experiments.

More information: ‘Generating customised experimental stimuli for visual search using Genetic Algorithms shows evidence for a continuum of search efficiency’ is published in the February edition of Vision Research.

Source: Queen Mary, University of London
