Self-directed robot can identify objects

February 23, 2016 by Rich Barlow

"That is a ball." "I do believe that is a cone." "Seems like a wonderful book." The voice is mechanical and flat, and anyone offering such banal commentary and sounding so bored would surely bomb in a job interview. But in this case, the observations are impressive. They're made by what looks like a two-foot-tall stack of hors d'oeuvre trays on wheels, careening around the floor and proclaiming its discoveries as its "eye," an attached camera, falls on them.

This robot has learned to recognize these specific objects—and to steer around obstacles, albeit clumsily—without human guidance. Its camera sends information about what it sees to a laptop sitting atop the robot; the laptop in turn communicates with a laboratory desktop, whose monitor flashes whatever the robot's camera catches.
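The article doesn't spell out the plumbing between camera, laptop, and desktop. A minimal sketch of one common approach (streaming JPEG-compressed frames over a TCP socket with OpenCV) might look like this; the desktop address, the port, and the choice of OpenCV are assumptions, not details from the project:

```python
# Hypothetical sketch: stream camera frames from the robot's laptop to a
# desktop viewer. The address and port are placeholders, not from the article.
import socket
import struct

import cv2

DESKTOP_ADDR = ("192.168.1.50", 9999)  # assumption: desktop on the lab network

def stream_camera():
    cap = cv2.VideoCapture(0)                # the robot's attached camera
    sock = socket.create_connection(DESKTOP_ADDR)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Compress each frame to JPEG and send it length-prefixed,
            # so the desktop can split the byte stream back into frames.
            ok, jpeg = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            data = jpeg.tobytes()
            sock.sendall(struct.pack(">I", len(data)) + data)
    finally:
        cap.release()
        sock.close()

if __name__ == "__main__":
    stream_camera()
```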

"It's almost self-thinking" in its ability to get around roadblocks, says Emily Fitzgerald (ENG'16), who bestowed the 'bot with a brain as her summer 2015 project with Boston University's Undergraduate Research Opportunities Program (UROP), which provides funding for faculty-mentored research by undergrad students. More important than the robot's autonomous navigation, she says, is its ability to recognize specific objects.

Such self-guiding, object-spotting robots are a Holy Grail for scientists, with potential applications that include exploring distant planets' landscapes. Fitzgerald used a deep neural network, a form of artificial intelligence loosely modeled on the brain's neurons. Deep neural networks learn from huge amounts of data to solve problems like recognizing a ball or a cone.


"There's an algorithm that will take a ton of pictures of one object and will put it in and compile it all," says Fitzgerald. "Then we basically assign a number to it." The robot "will come upon an object and it will say, 'Oh, there's an object in front of me, let me think about it.' It will…find a picture that corresponds with the object, pick that number, and then it will be able to use that as a reference, so it can exclaim, 'Oh, it's a ball,' 'It's a cone,' or whatever object I had decided to teach it."

Massimiliano Versace (GRS'07), a BU College of Arts & Sciences research assistant professor and director of BU's Neuromorphics Lab, oversaw Fitzgerald's UROP project, and she had help from Lucas Neves (ENG'16), a volunteer in Versace's lab, and Matthew Luciw, a visiting researcher at BU's Center for Computational Neuroscience & Neural Technology.

Asked how hard it was to train their metallic pupil in object recognition, the team members laugh. "There were quite a few times where we did despair a little bit that, you know, this wasn't going to work," says Fitzgerald, who first had to master an unfamiliar programming language. Then the team needed to make sure that the array of different software in the project would work together "without crashing the system," she says.

Often, the software wasn't compatible, resulting in a somewhat ditsy robot. "Most of the time, it just didn't start," Neves says, ruefully recalling those tough moments. It could also get lost: sensors in its wheels tell the robot how far it has traveled, but "the wheels weren't moving at a constant rate, so whenever the robot would shoot off, it would think it had gone farther than it had because the wheels spun faster," says Fitzgerald.
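That failure mode is a classic weakness of wheel odometry: the robot infers distance from encoder ticks, so a slipping or surging wheel overstates how far it actually rolled. A minimal sketch of differential-drive odometry shows where the error enters; the tick count, wheel radius, and wheel base below are invented values, not the robot's specs:

```python
# Minimal sketch of wheel odometry (not the team's code): distance is
# inferred from encoder ticks, so any wheel slip inflates the estimate.
import math

TICKS_PER_REV = 360    # assumption: encoder resolution
WHEEL_RADIUS = 0.05    # meters, assumption
WHEEL_BASE = 0.30      # distance between the wheels, meters, assumption

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Advance an (x, y, heading) pose estimate from one encoder reading."""
    per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = left_ticks * per_tick    # distance each wheel *claims* it rolled;
    d_right = right_ticks * per_tick  # a slipping wheel overstates this
    d_center = (d_left + d_right) / 2
    d_theta = (d_right - d_left) / WHEEL_BASE
    x += d_center * math.cos(theta + d_theta / 2)
    y += d_center * math.sin(theta + d_theta / 2)
    return x, y, theta + d_theta
```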

So the Terminator it isn't. Whether Fitzgerald's project will yield a commercial application someday remains an open question, says Versace, but he has no doubt about the viability of this type of work. Versace heads Neurala, a BU spin-off company, and members of his lab met recently with NASA to discuss related research.

As for Fitzgerald, who was turned on to engineering after excelling at physics and math in high school, she says the project persuaded her to pursue a career in bioimaging. Someday, she says, robotic surgical devices running off neural networks will detect objects in human patients.

"I've actually taken this project and I've said, OK, what else can I do with it in the biomedical setting as well?" she says. "It's really shaped how I've thought about my future going forward."



Comments

Scottingham (Feb 23, 2016):
Doesn't Wolfram Alpha have an object recognition API? While that may have defeated the purpose of this particular project, it seems like it'd be a decent 'real world' solution.

The robot could have its own neural net, but if it comes across an object it doesn't know it polls WA to get a match/label. It could then take multiple pictures of the object, along with others from Google Image search, to augment its own neural network for that object. It would then no longer be dependent on the outside networks for that object.
