Older adults don't speak 'robot,' study finds

Jul 16, 2013 by Susan Guibert

(Phys.org) — In order to effectively program robots that ultimately could be used to aid seniors, researchers at the University of Notre Dame and the University of Missouri studied the type of language older adults used when describing the location of a desired object to either a robot or a human-like avatar. It turns out that seniors become tongue-tied when talking to robots.

The objective of the study was to see how well these natural directives (e.g., "My glasses are on the table next to the couch in the living room.") can be translated into commands that would help program robots to navigate and find the target.
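
To make that translation step concrete, here is a minimal sketch in Python, not drawn from the study itself, of how a directive like the one above might be mapped into a structured command a fetch robot could act on; every class, field and function name below is hypothetical.

    from dataclasses import dataclass

    @dataclass
    class SpatialRelation:
        relation: str   # e.g. "on", "next_to", "in"
        landmark: str   # e.g. "table", "couch", "living_room"

    @dataclass
    class FetchCommand:
        target: str      # the object to retrieve
        relations: list  # chain of SpatialRelation constraints

    def parse_directive_stub(utterance):
        # Placeholder for the language-understanding step: a real system would
        # get this from a speech/NLP pipeline, not a hand-coded result.
        return FetchCommand(
            target="glasses",
            relations=[SpatialRelation("on", "table"),
                       SpatialRelation("next_to", "couch")],
        )

    cmd = parse_directive_stub("My glasses are on the table next to the couch.")
    print(cmd.target, [(r.relation, r.landmark) for r in cmd.relations])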

Using a simulated environment that resembled an eldercare setting, 64 older adults in the study addressed either a robot or a person named Brian, giving instructions to fetch the target. The study found that when talking to the robot, participants preferred to use fewer words and to adopt a speaker's perspective, whereas when talking to Brian, participants used more words and preferred an addressee's perspective.

"This research is important for the development of assistive devices for use in eldercare settings," says Laura Carlson, Notre Dame professor of and co-principal investigator of the study along with Marjorie Skubic of the University of Missouri.

"Older adults report wanting assistance from robots for fetching objects, and prefer to speak naturally to these devices, rather than use a more constrained interface. Thus, detailing how speak to robots and identifying how that conversation may differ from the way in which they speak to each other is necessary so that these preferences can be built into the programming of these devices."

There are two ways a speaker can indicate the location of the target: One can describe how to find it, as in this directive: "Go to the room on your right and go straight ahead, and the book is right there in front of you," or one can describe where it is, as in this description: "The book is in the room on your right, on the table at the far side of the room."
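
As an illustration only (this is not the representation used in the study), the two styles map naturally onto two different internal structures: a "how" directive reads as an ordered route, while a "where" description reads as a static goal. The names below are assumptions.

    # "Go to the room on your right and go straight ahead and the book is
    # right there in front of you." -> an ordered sequence of actions
    how_directive = [
        ("enter", "room_on_right"),
        ("go_straight", None),
        ("locate", "book"),
    ]

    # "The book is in the room on your right on the table at the far side
    # of the room." -> a target plus static spatial constraints
    where_description = {
        "target": "book",
        "constraints": [("in", "room_on_right"),
                        ("on", "table"),
                        ("at", "far_side_of_room")],
    }

    print(len(how_directive), "route steps;",
          len(where_description["constraints"]), "goal constraints")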

In the study, "how" descriptions were longer, more detailed and more dynamically structured than "where" descriptions. Even so, "where" descriptions proved more effective at conveying the target location.

The results of the study show that seniors prefer more streamlined communication with a task-oriented robot and do not necessarily want to speak to robots the same way they speak to other people.

"This study is an important first step to developing a system that adapts to the elderly users' language preferences instead of requiring them to adapt to the robot," says Carlson.




User comments (1)


krundoloss
Jul 17, 2013
If you could design the robot to recognize objects, couldn't it have a database of the location of everything in the house? Then you could just say "book," the robot would ask "which book?", you would say "The Great Gatsby," and the robot would know where it is. We should take advantage of a robot's ability to "know" things and never forget them. They need situational awareness, and communication would come more easily. Identifying and tagging objects and storing the info in a database would make these robots seem more alive.
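
As a rough, purely hypothetical sketch of what the commenter describes: a robot could keep a table of known objects and their last-seen locations, and ask a follow-up question when a request is ambiguous.

    from typing import Optional

    # Last-seen locations, keyed by (category, specific name); illustrative data only.
    object_locations = {
        ("book", "The Great Gatsby"): "living room shelf, second row",
        ("book", "cookbook"): "kitchen counter",
        ("glasses", None): "table next to the couch",
    }

    def find_object(category: str, name: Optional[str] = None) -> str:
        matches = {k: v for k, v in object_locations.items() if k[0] == category}
        if name is not None:
            matches = {k: v for k, v in matches.items() if k[1] == name}
        if len(matches) == 1:
            return next(iter(matches.values()))
        if len(matches) > 1:
            options = ", ".join(str(k[1]) for k in matches)
            return "Which " + category + "? I know about: " + options
        return "I don't know where any " + category + " is."

    print(find_object("book"))                      # ambiguous, so it asks which book
    print(find_object("book", "The Great Gatsby"))  # returns the stored location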