Do we think that machines can think?

Jul 09, 2008

When our PC goes on strike yet again, we tend to curse at it as if it were human. Why, and under what circumstances, do we attribute human-like properties to machines, and how do such processes manifest at the cortical level? These questions were investigated in a project led by Dr. Sören Krach and Prof. Tilo Kircher of RWTH Aachen University (Clinic for Psychiatry and Psychotherapy), in cooperation with the Department of Social Robotics at Bielefeld University and NeuroImage Nord (Hamburg). The findings are published July 9 in the online, open-access journal PLoS ONE.

Almost daily, the media report new accomplishments in the field of humanoid robotics. Increasingly elaborate and versatile humanoid robots are being built, and human-robot interactions are becoming more common in everyday life. Yet the question of how humans perceive these "machines" and attribute capabilities and "mental qualities" to them remains largely unexplored.

In the fMRI study reported in PLoS ONE, Krach and colleagues investigated how the human-likeness of an interaction partner modulates participants' brain activity. Participants played a simple computer game, the prisoner's dilemma, against four different partners: an ordinary computer notebook, a functionally designed Lego robot, the anthropomorphic robot BARTHOC Jr., and a human. All four partners played exactly the same sequence of moves, although this was not revealed to the participants.
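
For readers unfamiliar with the paradigm, the sketch below is a minimal illustration (not the authors' code): the payoff values are the conventional textbook ones and the move sequence is invented, since the article does not give the actual sequence used. It shows a game partner that simply replays a fixed, pre-scripted series of moves, as all four partners in the study did.

```python
# Minimal sketch of an iterated prisoner's dilemma against a scripted partner.
# Payoff values are the conventional textbook ones, not necessarily those used
# in the study; the scripted move sequence is invented for illustration.

# (my_payoff, partner_payoff) indexed by (my_move, partner_move);
# "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# Hypothetical pre-scripted sequence; every partner replays the same script.
SCRIPTED_MOVES = ["C", "D", "C", "C", "D"]

def play_round(trial, participant_move):
    """Return (participant_payoff, partner_payoff) for one trial."""
    partner_move = SCRIPTED_MOVES[trial % len(SCRIPTED_MOVES)]
    return PAYOFFS[(participant_move, partner_move)]

if __name__ == "__main__":
    total = 0
    for trial, move in enumerate(["C", "C", "D", "C", "D"]):  # example choices
        payoff, _ = play_round(trial, move)
        total += payoff
    print("Participant total payoff:", total)
```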

The results clearly demonstrated that neural activity in the medial prefrontal cortex and the right temporo-parietal junction increased linearly with the degree of human-likeness of the interaction partner: the more human-like features a game partner exhibited, the more participants engaged cortical regions associated with mental state attribution, or mentalizing.
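
To make the notion of a linear trend concrete, the toy sketch below fits a straight line to per-condition activation across the four partners, ordered by human-likeness. It is an illustration only: the activation values are invented, and the study's actual analysis was performed on fMRI contrast estimates.

```python
# Toy illustration of a linear trend across the four partner conditions.
# The activation estimates are invented; only the analysis idea is shown.
import numpy as np

# Partners ordered by assumed human-likeness:
# notebook, Lego robot, BARTHOC Jr., human.
human_likeness = np.array([1, 2, 3, 4])
example_betas = np.array([0.2, 0.5, 0.7, 1.1])  # hypothetical activation values

slope, intercept = np.polyfit(human_likeness, example_betas, 1)
print(f"Fitted trend: activation ~= {intercept:.2f} + {slope:.2f} * human-likeness rank")
```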

Further, in a debriefing questionnaire, participants reported enjoying the interaction more the more human-like their game partner was, and accordingly rated more human-like opponents as more intelligent.

This study is the first to investigate the neural basis of direct human-robot interaction at a higher cognitive level such as mentalizing. The researchers therefore expect the results to inform long-standing psychological and philosophical debates about human-machine interaction, and in particular the question of what leads a counterpart to be perceived as human.

Citation: Krach S, Hegel F, Wrede B, Sagerer G, Binkofski F, et al. (2008) Can Machines Think? Interaction and Perspective Taking with Robots Investigated via fMRI. PLoS ONE 3(7): e2597. doi:10.1371/journal.pone.0002597 (www.plosone.org/doi/pone.0002597)

Source: Public Library of Science

User comments: 1

jeffsaunders
Jul 09, 2008
I have even been known to wonder if my car was out to get me when it would act contrary and seem to be sabotaging my chances of having a pleasurable driving experience.

Computers seem to be able to do this with their go-slows and their perverse habit of sticking in uppercase when you want lowercase, etc.

I think that the more tailored the adverse response, the more personally we take it, and that, conversely, we can really adore machines that perform the way we want, when we want them to.

How much more could be achieved if someone deliberately set out to exploit this human trait? Any machine that responds in a human way, whether deliberately trying to annoy us or deliberately trying to please us, will trigger anthropomorphism.

We did have an article on here the other day about robot lovers and how likely they are to be quite common in the future. For sure, I don't see any reason for someone to build a robot programmed to be extremely annoying, except perhaps as a means of torture.

It could be a good research tool, but I bet even the people who owned such a robot would not look after it as well as one programmed to be pleasing.