Do we think that machines can think?

Jul 09, 2008

When our PC goes on strike yet again, we tend to curse at it as if it were human. The question of why and under what circumstances we attribute human-like properties to machines, and how such processes manifest at the cortical level, was investigated in a project led by Dr. Sören Krach and Prof. Tilo Kircher from the RWTH Aachen University (Clinic for Psychiatry and Psychotherapy) in cooperation with the Department of "Social Robotics" (Bielefeld University) and Neuroimage Nord (Hamburg). The findings are published July 9 in the online, open-access journal PLoS ONE.

Almost daily, the media report new accomplishments in the field of humanoid robotics. Increasingly elaborate and versatile humanoid robots are being built, and human-robot interactions are becoming more common in everyday life. However, the question of how humans perceive these "machines" and attribute capabilities and "mental qualities" to them remains largely unexplored.

In the fMRI study reported in PLoS ONE, Krach and colleagues investigated how increasing the human-likeness of an interaction partner modulates participants' brain activity. Participants played a simple computer game (the prisoner's dilemma) against four different game partners: an ordinary notebook computer, a functionally designed Lego robot, the anthropomorphic robot BARTHOC Jr., and a human. All four partners played an identical, predetermined sequence of moves, which was not revealed to the participants.
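
For readers unfamiliar with the game, the following minimal sketch illustrates a prisoner's dilemma round against an opponent that plays a fixed, pre-scripted sequence of moves. This is not the study's actual software; the payoff values, the scripted move sequence, and the function names are illustrative assumptions only.

```python
# Illustrative prisoner's dilemma sketch with a pre-scripted opponent.
# Payoffs and the opponent's move sequence are hypothetical; the study's
# actual parameters are described in the PLoS ONE paper.

# Payoff matrix: (participant_points, opponent_points) for (participant_move, opponent_move)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Every "opponent" (notebook, Lego robot, BARTHOC Jr., human) would follow
# the same fixed sequence, unknown to the participant.
SCRIPTED_MOVES = ["cooperate", "defect", "cooperate", "cooperate", "defect"]

def play_round(round_index: int, participant_move: str) -> tuple[int, int]:
    """Return (participant_points, opponent_points) for one round."""
    opponent_move = SCRIPTED_MOVES[round_index % len(SCRIPTED_MOVES)]
    return PAYOFFS[(participant_move, opponent_move)]

if __name__ == "__main__":
    total = 0
    for i, move in enumerate(["cooperate", "cooperate", "defect"]):
        mine, theirs = play_round(i, move)
        total += mine
        print(f"Round {i + 1}: you chose {move}, you scored {mine}, opponent scored {theirs}")
    print("Your total:", total)
```

Because the opponent's behavior is held constant across all four partner conditions in a design like this, any difference in brain activity can be attributed to the perceived human-likeness of the partner rather than to differences in gameplay.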

The results demonstrated that neural activity in the medial prefrontal cortex and the right temporo-parietal junction increased linearly with the degree of "human-likeness" of the interaction partner: the more human-like features a game partner exhibited, the more the participants engaged cortical regions associated with mental state attribution (mentalizing).

Furthermore, in a debriefing questionnaire, participants reported that they enjoyed the interactions most when their interaction partners displayed the most human-like features, and they accordingly rated those opponents as more intelligent.

This study is the first to investigate the neural basis of direct human-robot interaction at a higher cognitive level such as mentalizing. The researchers therefore expect the results to inform long-standing psychological and philosophical debates about human-machine interaction, and especially the question of what causes humans to be perceived as human.

Citation: Krach S, Hegel F, Wrede B, Sagerer G, Binkofski F, et al. (2008) Can Machines Think? Interaction and Perspective Taking with Robots Investigated via fMRI. PLoS ONE 3(7): e2597. doi:10.1371/journal.pone.0002597 (www.plosone.org/doi/pone.0002597)

Source: Public Library of Science



User comments (1)


jeffsaunders (Jul 09, 2008)
I have even been known to wonder if my car was out to get me when it would act contrary and seem to be sabotaging my chances of having a pleasurable driving experience.

Computers seem to be able to do this with their go-slows and their perverse habit of sticking in uppercase when you want lower-case typing, etc.

I think that the more tailored the adverse response, the more personally we can take it, and that conversely we can really adore machines that perform the way we want, when we want them to.

How much more can be achieved when someone deliberately sets out to exploit this human character trait? Any machine that responds in a human way, whether deliberately trying to annoy us or deliberately trying to please us, will trigger anthropomorphism.

For sure we had an article in here the other day about robot lovers and how likely they are to be quite common in the future. For sure I don't see any reason for someone to build a robot programmed to be extremely annoying except perhaps as a means of torture.

It could be a good research tool but I bet even the people that owned such a robot would not look after it as well as one programmed to be pleasing.