Economic games don't show altruism

Payoff-based learning explains the decline in cooperation in public goods games

Economic 'games' routinely used in the lab to probe people's preferences and thoughts appear to show that humans are uniquely altruistic, sacrificing money to benefit strangers. A new study published in the journal Proceedings of the Royal Society B suggests that people don't actually play these games in the way researchers expect, and finds no evidence for altruistic behaviour.

'These results do not necessarily mean that humans are selfish rather than altruistic', cautions lead author Dr Maxwell Burton-Chellew from the Department of Zoology at the University of Oxford, 'but they do mean that current evidence cannot support claims about humans being altruistic in these laboratory experiments. While the previous results are robust and were replicated by our study, once you put in the proper controls, the previous interpretations of these results no longer stand up.'

I asked him about what these results are likely to mean for the field.

OxSciBlog: What do these economic games involve?

Maxwell Burton-Chellew: In the 'public goods' game that we and others have used, we bring a group of people into the lab, and we tell them that they're playing a game with other people, and that they have the chance to earn some money. They play anonymously through a computer, so they don't see or talk to each other. They all have an initial sum of virtual money (which is paid out for real at the end), and they can either keep this money, or invest it in a common pot. Putting money into the common pot is more efficient, since we double the total contributions, but the people who benefit are the other players: the money gets divided equally, regardless of individual contributions. The volunteers play this game again and again, and we don't spell it out to them that investing is personally costly, although this information is in the instructions.
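A minimal sketch of how the earnings work in one such round might look like the following, written in Python. The group size, endowment and multiplier here are illustrative assumptions only (the actual parameters vary between experiments), and the function name is invented for this example.

def round_payoffs(contributions, endowment=20, multiplier=2.0):
    """Earnings for one round of a linear public goods game.

    Every unit put into the common pot is multiplied and then shared
    equally among all players, regardless of who contributed it.
    """
    n = len(contributions)
    pot_share = multiplier * sum(contributions) / n
    return [endowment - c + pot_share for c in contributions]

# One free-rider among three full contributors earns the most:
print(round_payoffs([20, 20, 20, 0]))    # [30.0, 30.0, 30.0, 50.0]
# Everyone investing everything beats everyone keeping everything:
print(round_payoffs([20, 20, 20, 20]))   # [40.0, 40.0, 40.0, 40.0]
print(round_payoffs([0, 0, 0, 0]))       # [20.0, 20.0, 20.0, 20.0]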

OSB: What have others found using these games?

MBC: The way to win the most money in this game is to be a 'free-loader': even if a player doesn't invest anything at all in the common pot, they still get an equal share of what everyone has contributed, so the 'rational' thing to do is to just keep all the money, and not contribute anything at all into the common pot.
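To make that arithmetic concrete (using illustrative parameters; the exact values vary between experiments): in a four-player group where the pot is doubled, every unit a player invests returns only 2/4 = 0.5 units to that player, a personal loss of 0.5 per unit, while each of the other three players also gains 0.5 units, so the group as a whole gains 1 unit. Keeping the money is therefore individually best whatever the others do, even though everyone investing everything would maximise the group's total earnings.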

However, this is not what people actually do - they still invest money into the common pot, even when they could calculate, after playing the game again and again, that they are getting less than they are putting in. Previous work has assumed that this is because people are altruistic: willing to help others at personal cost. This was surprising because traditional economic theory holds that organisms (including humans) are rational: they make consistent choices, making use of all available information, and they are self-interested, prioritizing their own interests above others.

But the Nobel prize-winning work from Daniel Kahneman and Amos Tversky showed that people are not always consistent in their decisions, and instead have all sorts of cognitive biases, leading to imperfect choices that limit their own welfare. This idea is now widely accepted: we know that people are not robots.

Now, more recent work appears to show that people are not self-interested either, overturning what is left of the idea of rational choices. However, to conclude that, you have to assume that people are making use of all the information in the instructions: working out in advance that they will make a loss, and then going ahead anyway, in the interests of others. But we know that this assumption is often untrue! So this feels a bit like having your cake and eating it too.

OSB: What did you find instead?

MBC: Previous experiments couldn't distinguish between an imperfect player and an unselfish player, and they assumed that it was the latter. Our study instead suggests that it is the former.

To do this, we analysed data we had collected previously from 236 people playing the public goods games. We pitted against each other three different rules that people could potentially use to play the game, to find the one that best fitted the way that people actually played. The 'payoff maximizing' rule assumed that people want to earn as much money as possible but are initially unsure about how to do this. The 'prosocial' rule assumed that people are trying to get the most income for both themselves and the group, while the 'conditional cooperation' rule assumed a sort of 'tit-for-tat' behaviour, with people contributing only when other players do so.
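A deliberately simplified caricature of what each rule predicts a player will contribute next round is sketched below in Python. This is not the statistical model-fitting procedure used in the study; the function names, step size and endowment are assumptions made purely for illustration.

ENDOWMENT = 20  # assumed endowment, for illustration only

def payoff_maximiser(last_contribution, last_direction, payoff_went_up, step=2):
    """Payoff-maximising rule: trial-and-error learning. Keep adjusting
    the contribution in the same direction while earnings improve, and
    reverse direction when they fall."""
    direction = last_direction if payoff_went_up else -last_direction
    return max(0, min(ENDOWMENT, last_contribution + direction * step))

def prosocial():
    """Prosocial rule: contribute everything, since full contributions
    maximise the combined earnings of self and group."""
    return ENDOWMENT

def conditional_cooperator(others_mean_last_round):
    """Conditional cooperation: roughly match what the rest of the group
    contributed on the previous round (a tit-for-tat-like rule)."""
    return round(others_mean_last_round)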

We found that the payoff maximizing rule explained our volunteers' behaviour much better than the other rules, and our analyses suggest that what researchers had previously thought were altruistic choices were actually just people learning to play the game.

OSB: Does this mean that the field needs to re-evaluate some widely accepted findings?

MBC: This is potentially quite a big problem for the field, since all the work (and there is a lot!) using these economic games assumes that you can probe people's thoughts, desires and, importantly, preferences by using these games. But if they don't understand the game, it all falls apart. For example, some previous work uses these games to suggest that different people might have varying levels of altruism, with culture and specific genes influencing altruism. But these results could just reflect differences in how well people understand the game, how consistently they play it, whether they use all the information available to them or ignore it, or any combination of these factors.

OSB: How are you planning to explore these ideas further?

MBC: Well, it is interesting to contrast how animal behaviourists and economists study animals making choices. In non-human animal studies, you need a lot of evidence to back up any claims of cognition, and you have to build up any claims for cognitive operations (such as altruism or rational decision-making) from the bottom up. Economics, on the other hand, often assumes that humans always make sensible decisions, and any claims of deviations from this instead need lots of evidence. So it's more of a top-down approach. Scientifically, this doesn't make any sense: there shouldn't be a difference in approaches to studying humans versus other animals. I am hoping to bring these two approaches together.

More specifically, we're tackling some of the assumptions behind the idea that there are different types of people when it comes to altruistic behaviour, and what factors promote cooperation amongst people.

We are also interested in looking at the interaction between culture and the spread of social learning.



More information: "Payoff-based learning explains the decline in cooperation in public goods games", Proc. R. Soc. B 282: 20142678 (2015). DOI: 10.1098/rspb.2014.2678. Published 14 January 2015.
Provided by Oxford University
Citation: Economic games don't show altruism (2015, January 16) retrieved 15 July 2019 from https://phys.org/news/2015-01-economic-games-dont-altruism.html


User comments

Jan 16, 2015
Am I wrong, or isn't it that in the game these people are asked to play, the best income for everyone would come if everyone put all their money in the common pot, in which case everybody (and anybody) gets their money doubled by the researchers (assuming everybody gets the same amount of money initially)? In that case, isn't it actually a game of risk assessment with nothing to do with altruism, maybe even a team play against a third party (the researchers) under conditions of limited (or no) communication?

Jan 16, 2015
Because intrinsic altruism does not exist.


Jan 16, 2015
Actually it's there in the article, but they needed another, more complicated virtual experiment to see what's obvious from the rules alone: that psychology researchers can't count, that they messed up the rules and misinterpreted them (well, that part isn't obvious from the rules alone), and that most ordinary players actually assessed the rules of the game better than the researchers did, and even hoped to some degree that other players would do so too.

Jan 16, 2015
'Altruism' is the provisioning of others for abstract reward, like feeling good or thinking higher of one's self.

Giving away one's resources for no reason at all is a clinically significant psychiatric condition.

When the games were tested on tribal people, who were unfamiliar with game play and so treated the game as real rather than virtual, very altruistic and honest behaviour was noted.

This is quite unlike (opposite to) the behaviour of students, who are familiar with the concept that games are virtual and so need to be enticed with real money to treat the games as real, a process that achieves only partial success.

The paper that proved that Game Theory is fundamentally flawed is:
'Economic Man' in Cross-cultural Perspective: Behavioral Experiments in 15 Small-scale Societies
Joseph Henrich, Herbert Gintis et al

"We found, first, that the canonical selfishness-based model fails in all of the societies studied."

Jan 16, 2015
A big grain of salt needs to be taken with this one. First, it's a game: the players know that the objective of the game is to get as much money for themselves as possible, so it's like saying that how competitive or altruistic an individual is when they play Monopoly is how they behave in real life. Second, the players cannot see the other players' faces, so the other players remain utterly abstract; the players could be playing against a computer for all they really know. Testing for altruistic behaviour toward other people in an environment devoid of other people is invalid, to say the least.
