Teaching robots to see

December 15, 2014, Victoria University
Image: The Wellington Cable Car as processed by different salient object detection algorithms.

Syed Saud Naqvi, a PhD student from Pakistan, is working on an algorithm to help computer programs and robots view static images in a way that is closer to how humans see.

Saud explains: "Right now computer programs see things as very flat—they find it difficult to distinguish one object from another."

Facial recognition is already in use but, says one of Saud's supervisors, Dr Will Browne, object detection is more complex as there are many more variables.

Different object detection algorithms exist: some focus on patterns, textures or colours, while others focus on the outline of a shape. Saud's algorithm extracts the most relevant information for decision-making by selecting the best algorithm to use on an individual image.

"The defining feature of an object is not always the same—sometimes it's the shape that defines it, sometimes it's the textures or colours. A picture of a field of flowers, for example, could need a different algorithm than an image of a cardboard box," says Saud.

Work on the algorithm was presented at this year's Genetic and Evolutionary Computation Conference (GECCO) in Vancouver, where it received a Best Paper Award.

The computer vision algorithm will now be taken further through a Victoria Summer Scholarship project to apply it to a real-world robot for object detection tasks. This will take the algorithm from analysing static images to dynamic, real-time scenes.

It is hoped that the algorithm will be able to help a robot to navigate its environment by being able to separate objects from their surrounds.

Dr Browne says there are a number of uses for this kind of technology, both now and in the future. Immediate possibilities include use on social media and other websites to automatically caption photos with information on their location or content.

"Most of the robots that have been dreamed up in pop culture would need this kind of technology to work. Currently, there aren't many home helper robots which can load a washing machine—this technology would help them do it."

It's early days, but Dr Browne says that in the future this kind of imaging technology could be adapted for use in medical testing, such as identifying cancer cells in a mammogram.
