Older adults don't speak 'robot,' study finds

Jul 16, 2013 by Susan Guibert
Laura Carlson

(Phys.org) —In order to effectively program robots that ultimately could be used to aid seniors, researchers at the University of Notre Dame and University of Missouri studied the type of language older adults used when describing the location of a desired object to either a robot or human-like avatar. It turns out that seniors become tongue-tied when talking to robots.

The objective of the study was to see how well these natural directives (e.g., "My glasses are on the table next to the couch.") can be translated into commands that would help program robots to navigate and find the target.
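The study itself does not publish code, but the idea of mapping a natural directive onto a machine-usable command can be sketched. The snippet below is a minimal, purely illustrative example in Python: the SpatialCommand structure, the toy regular-expression grammar, and the relation names are assumptions made for this sketch, not the researchers' actual system.

```python
# Illustrative sketch only: a toy parser that turns a spatial directive such as
# "My glasses are on the table next to the couch" into a structured fetch command.
# The SpatialCommand fields and the tiny grammar below are assumptions for this
# example, not the study's actual translation pipeline.
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpatialCommand:
    target: str    # object to fetch, e.g. "glasses"
    relation: str  # spatial relation, e.g. "on"
    landmark: str  # reference object, e.g. "table"

def parse_directive(directive: str) -> Optional[SpatialCommand]:
    """Match a very simple 'my TARGET is/are RELATION the LANDMARK' pattern."""
    pattern = r"my (\w+) (?:is|are) (on|in|under|next to) the (\w+)"
    match = re.search(pattern, directive.lower())
    if match is None:
        return None
    target, relation, landmark = match.groups()
    return SpatialCommand(target=target, relation=relation, landmark=landmark)

print(parse_directive("My glasses are on the table next to the couch"))
# SpatialCommand(target='glasses', relation='on', landmark='table')
```

A real system would need far richer language understanding, but even this toy version shows why the participants' phrasing matters: shorter, more regular directives are easier to map onto commands like this one.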

Using a simulated environment that resembled an eldercare setting, 64 older adults in the study addressed either a robot or a person named Brian, giving instructions to fetch the target. The study found that when talking to the robot, participants preferred to use fewer words and to adopt a speaker's perspective, whereas when talking to Brian, participants used more words and preferred an addressee perspective.

"This research is important for the development of assistive devices for use in eldercare settings," says Laura Carlson, Notre Dame professor of and co-principal investigator of the study along with Marjorie Skubic of the University of Missouri.

"Older adults report wanting assistance from robots for fetching objects, and prefer to speak naturally to these devices, rather than use a more constrained interface. Thus, detailing how speak to robots and identifying how that conversation may differ from the way in which they speak to each other is necessary so that these preferences can be built into the programming of these devices."

There are two ways in which the location of the target can be indicated by the speaker: One can describe how to find it, as in this directive: "Go to the room on your right and go straight ahead and the book is right there in front of you," or one can describe where it is, as in this description: "The book is in the room on your right on the table at the far side of the room."
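To make the contrast concrete, the two styles can be thought of as different data shapes, as in the illustrative Python sketch below. The field names and relation labels are invented for this example and are not the study's formal coding scheme: a "how" directive reads like an ordered sequence of actions, while a "where" description reads like a set of static spatial relations.

```python
# Illustrative only: the two directive styles as simple data structures.
# Field and relation names are assumptions for this sketch.

how_directive = [  # "Go to the room on your right, go straight ahead, the book is in front of you."
    {"action": "go_to", "landmark": "room", "relation": "right_of", "anchor": "addressee"},
    {"action": "go_straight"},
    {"action": "find", "target": "book", "relation": "in_front_of", "anchor": "addressee"},
]

where_description = [  # "The book is in the room on your right, on the table at the far side."
    {"target": "book", "relation": "in", "landmark": "room"},
    {"landmark": "room", "relation": "right_of", "anchor": "addressee"},
    {"target": "book", "relation": "on", "landmark": "table"},
    {"landmark": "table", "relation": "at_far_side_of", "anchor": "room"},
]
```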

In the study, "how" descriptions were longer, contained more detail and were dynamically structured as compared to "where" descriptions. However, results showed that "where" descriptions were found to be more effective in conveying the target location.

The results of the study show that seniors prefer more streamlined communication with a task-oriented robot and do not necessarily want to speak to robots the same way they speak to other people.

"This study is an important first step to developing a system that adapts to the elderly users' language preferences instead of requiring them to adapt to the robot," says Carlson.

User comments: 1

krundoloss
not rated yet Jul 17, 2013
If you could design the robot to recognize objects, couldn't it have a database of the location of everything in the house? Then you could just say "book" and the robot would ask "which book," and you would say "The Great Gatsby," and the robot would know where it is. We should take advantage of the robot's ability to "know" things and never forget them. They need situational awareness, and communication would come more easily. Identifying and tagging objects and storing the info in a database would make these robots seem more alive.
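A toy sketch of the commenter's suggestion follows; the ObjectRegistry class and its methods are hypothetical, invented for illustration rather than taken from the article or any real robot platform.

```python
# Toy illustration of the commenter's idea: a registry of recognized objects and
# their last-known locations, with a clarifying question when a name is ambiguous.
# ObjectRegistry and its API are hypothetical, made up for this sketch.
from collections import defaultdict
from typing import List, Optional, Union

class ObjectRegistry:
    def __init__(self):
        self._locations = defaultdict(dict)  # category -> {name: location}

    def tag(self, category: str, name: str, location: str) -> None:
        """Record where a recognized object was last seen."""
        self._locations[category][name] = location

    def locate(self, category: str, name: Optional[str] = None) -> Union[str, List[str], None]:
        """Return a location, or a list of candidate names the robot should ask about."""
        items = self._locations.get(category, {})
        if name is not None:
            return items.get(name)
        if len(items) == 1:
            return next(iter(items.values()))
        return sorted(items)  # ambiguous: the robot asks "which one?"

registry = ObjectRegistry()
registry.tag("book", "The Great Gatsby", "nightstand in the bedroom")
registry.tag("book", "cookbook", "kitchen counter")
print(registry.locate("book"))                      # ['The Great Gatsby', 'cookbook']
print(registry.locate("book", "The Great Gatsby"))  # nightstand in the bedroom
```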