Does behavioral economics show people are altruistic or just confused?
Behavioral economists have revolutionized the standard view of human nature. No longer are people presumed to be purely selfish, only acting in their own interest. Hundreds of experiments appear to show that most people are pro-social, preferring to sacrifice their own success in order to benefit others. That's altruism.
If the interpretations of these experiments are correct, then we have to rip up the textbooks for both economics and evolutionary biology! Economic and evolutionary models assume that individuals act unselfishly only when they stand to benefit in some way. Yet humans appear to be unique in the animal kingdom: experiments suggest they willingly sacrifice their own success on behalf of strangers they will never meet. These results have led researchers to look for the evolutionary precursors of such exceptional altruism by running these kinds of experiments with non-human primates as well.
But are these altruism experiments really evidence of humans being special? Our new study says probably not.
What the experiments do and don't show
To investigate altruism, behavioral economists ask people to play games with real money at stake. In one type, known as public-goods games, they use abstract economic scenarios that resemble real-life situations, such as paying your taxes or neighbors working together to build a fence. In the most basic version, each player decides how much of their money to share with the other players and how much to keep for themselves.
Hundreds of studies have shown that people will typically start out paying around half of their lab money toward an unselfish option. But when the game is repeated, they tend to pay less and less over time. The advantage of such games over hypothetical surveys is that people cannot lie about what they would do – they have to put their money where their mouth is. The disadvantage is that people can find such games confusing and unfamiliar. After all, we didn't evolve to play lab games.
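The basic game can be sketched in a few lines of code. The endowment, group size, and multiplier below are illustrative assumptions, not parameters from any particular study; the key feature is that each unit contributed returns less than a unit to the contributor, so keeping money always pays more individually.

```python
# A minimal sketch of one round of a linear public-goods game.
# Endowment, group size, and multiplier are illustrative assumptions.

def payoffs(contributions, endowment=20, multiplier=1.6):
    """Each player keeps what they don't contribute; the pooled
    contributions are multiplied and split equally among all players."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Four players each start with 20 units and contribute half:
print(payoffs([10, 10, 10, 10]))  # everyone earns 26.0
# A free-rider who contributes nothing earns more than the cooperators:
print(payoffs([0, 10, 10, 10]))   # [32.0, 22.0, 22.0, 22.0]
```

Because each contributed unit returns only 1.6/4 = 0.4 units to the contributor, a purely self-interested player maximizes earnings by contributing nothing, whatever the others do.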
This raises the possibility that people in such experiments are not being altruistic at all, but are simply uncertain or confused and making mistakes (from the perspective of trying to make money). This could explain why cooperation typically declines over time, as people learn to play the game better. Our recent study investigates this question.
Mistakes or altruism?
The typical experiment design means that whenever anyone makes a mistake – by failing to maximize their earnings – they automatically help the other players, making it difficult to distinguish mistakes from pro-sociality. To investigate, we ran a study that included a game design in which all 'mistakes' automatically had anti-social outcomes. In this version, the best thing to do financially was to pay to help the group, because this also helped you.
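One simple way to build such a game (a hypothetical sketch – the study's exact design may differ) is to raise the multiplier above the group size, so that every unit contributed returns more than a unit to the contributor. The selfish optimum then coincides with helping the group, and failing to contribute hurts both yourself and everyone else.

```python
# Hypothetical variant: the multiplier (5) exceeds the group size (4),
# so each contributed unit returns 5/4 = 1.25 units to the contributor.
# Parameters are illustrative assumptions, not the study's.

def payoff(own_contribution, others_total, endowment=20, multiplier=5, n=4):
    share = (own_contribution + others_total) * multiplier / n
    return endowment - own_contribution + share

# Holding the others' total fixed, contributing more always pays:
print(payoff(0, 30))   # prints 57.5
print(payoff(20, 30))  # prints 62.5
```

In this structure, keeping money back is the 'mistake', and it harms the group as well as the player – so mistakes can no longer be confused with altruism.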
Surprisingly, we found that mistakes were still just as likely, with players hurting themselves and the group simultaneously. In this case, the usual interpretation would have to argue that people play these games both pro- and anti-socially. We think it's simpler to conclude that people are just not that motivated by the effects their choices have on others, at least in these cooperation games.
We later showed that groups of players we knew to be uncertain (because we did not give them the instructions) and self-interested (because we told them they were playing against a "black box" rather than in a group of people) played the game in the same way as people who had the instructions and knew they were playing with each other! This suggests that people in the standard game were playing in a similar way, and were not motivated by the actions of others.
Not everyone agrees with our view. Some argue instead that people are pro-socially motivated, that they understand the game perfectly, but choose to cooperate conditionally – that is, only if others cooperate. This view is supported by evidence that people appear to copy what others do in such games.
Learning from your mistakes
In our new study, we tried to predict when players would increase and decrease their contributions based on what happened to them in previous rounds. We considered three different behavioral rules players could use, and tested them in three versions of the game.
One way to play we called payoff-based learning. If a player's earnings had gone up, this counted as a success, and we predicted the player would keep doing whatever had made that happen. If their earnings had gone down – termed a failure – they would reverse course and switch strategy. This rule would fit a player who is initially uncertain but self-interested. We found that this payoff-based learning rule significantly explained individuals' decisions over time in all three versions of the game.
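The payoff-based rule can be sketched as a simple win-stay, lose-shift simulation. Everything below (group size, endowment, multiplier, step size) is an illustrative assumption rather than the study's actual model; the point is only to show how self-interested learners who react to their own earnings, not to others' choices, would play the game.

```python
import random

# Illustrative parameters, not the study's: 4 players, endowment of 20,
# pot multiplier of 1.6, and contribution adjustments in steps of 2.
ENDOWMENT, MULTIPLIER, N, STEP = 20, 1.6, 4, 2

def round_payoffs(contribs):
    # Each player keeps the rest of their endowment plus an equal
    # share of the multiplied pot.
    share = sum(contribs) * MULTIPLIER / N
    return [ENDOWMENT - c + share for c in contribs]

def simulate(rounds=50, seed=1):
    rng = random.Random(seed)
    contribs = [ENDOWMENT // 2] * N          # start near half, as observed
    moves = [rng.choice([-STEP, STEP]) for _ in range(N)]
    prev = round_payoffs(contribs)
    avg_contrib = [sum(contribs) / N]
    for _ in range(rounds):
        contribs = [max(0, min(ENDOWMENT, c + m))
                    for c, m in zip(contribs, moves)]
        cur = round_payoffs(contribs)
        # Success (earnings rose): keep the move; failure: reverse it.
        moves = [m if cur[i] >= prev[i] else -m
                 for i, m in enumerate(moves)]
        prev = cur
        avg_contrib.append(sum(contribs) / N)
    return avg_contrib

history = simulate()
print(history[0], history[-1])
```

Each simulated player nudges their contribution up or down, keeps the nudge if their own earnings rose, and reverses it if they fell – without ever looking at what groupmates contributed.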
The other behavioral rules we considered were pro-social, and they did not explain how people acted. One judged success by how much money the other players made – and it was not significant for any of the games.
Another judged success by how cooperative groupmates were – the more altruistic they were, the more altruistic the individual would become. This conditional-cooperation rule was only significant in the standard game, where players knew their own earnings and the decisions of others. Its significance disappeared in an enhanced version of the game in which players had more information about groupmates' earnings. This suggests that when players appear to be conditionally cooperating on the basis of what others do, they may actually just be copying the actions of others because they are uncertain. That would explain why they copy less in the enhanced game, where the costs and benefits of their actions are clearer.
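For contrast, the conditional-cooperation rule can be sketched as a player who moves toward groupmates' average contribution. The function and its matching rate are hypothetical illustrations, not the study's exact model:

```python
def conditional_cooperation(own, others, rate=1.0):
    """Move one's own contribution toward the groupmates' average.
    rate=1.0 means full matching; lower rates mean partial copying.
    A hypothetical sketch, not the study's exact model."""
    target = sum(others) / len(others)
    return own + rate * (target - own)

print(conditional_cooperation(5, [10, 15, 20]))       # prints 15.0
print(conditional_cooperation(5, [10, 15, 20], 0.5))  # prints 10.0
```

Note that this rule responds only to what others contribute, not to the player's own earnings – which is exactly why richer earnings information in the enhanced game can tease the two rules apart.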
Revolutionary yes, altruistic no
Standard economic theory relies on the idea of rational choice, maintaining that people will consistently reveal their preferences through their selections. This means that your costly decisions can be used to measure your preferences and desires. (How much do you want an apple? Do you prefer apples to oranges?)
However, one of the early triumphs of the behavioral economics approach was to show that people don't act like rational self-interested robots. Instead, people are imperfect and often inconsistent (preferring apples to bananas, and bananas to carrots, but carrots to apples), making choices that harm their own welfare.
Yet paradoxically, similar studies reach a different conclusion when people have to make social decisions. Many researchers choose to keep the rationality assumption but reject the self-interest assumption, concluding that people are not making mistakes but instead have a preference for being altruistic. We consider our study to support the idea that over time in these public-goods games, people are probably just learning how to improve their own income, independent of any altruistic impulses.
This story is published courtesy of The Conversation (under Creative Commons-Attribution/No derivatives).