Unlike in perfectly competitive markets, there are many economic situations in which selfish rational behavior does not imply an efficient allocation of resources. One example is the financing of a public good. Say some neighbors are asked to contribute to the construction of a one-hectare (10,000 m2) park in the neighborhood. If the cost of building the park is €100,000 and it must be shared equally among the 100 neighbors, each one would pay €1,000. If, in addition, every neighbor values the park at more than those €1,000, the construction of the park is an efficient investment. Now, if the neighbors are not forced to pay those €1,000, but are asked to contribute voluntarily, the park will not be built, or at least not at that size. Each neighbor thinks: “If everyone else contributes, my contribution only changes the size of the park from 9,900 m2 to the planned 10,000 m2. The park is marginally smaller and I save all my money. If no one else contributes, my €1,000 will only buy 100 m2. In either case, I prefer not to contribute: a hundred more square meters of park are not worth €1,000 to me.” This is the free rider problem. It does not work to argue that “if everyone else thinks like you, then we will miss an opportunity to be better off”, since that does not change the way everyone else is thinking. This is why taxes must be enforced by law. It is also an example of the famous prisoners’ dilemma.
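The wedge between the private and the social return on a contribution can be made explicit with a small calculation. This is a sketch using the numbers above; the per-square-meter value v that each neighbor assigns to the park is a hypothetical parameter, not something given in the text.

```python
# Numbers from the example: a 10,000 m2 park costing EUR 100,000,
# shared by 100 neighbors (EUR 1,000 each).
COST_PER_M2 = 100_000 / 10_000   # EUR 10 buys one square meter
CONTRIBUTION = 1_000             # one neighbor's share
NEIGHBORS = 100

def private_return(v):
    """Extra park value ONE neighbor enjoys from her own EUR 1,000,
    if she values park area at v euros per square meter (hypothetical)."""
    extra_m2 = CONTRIBUTION / COST_PER_M2   # 100 m2
    return extra_m2 * v

def social_return(v):
    """Total value ALL 100 neighbors together get from that same EUR 1,000."""
    extra_m2 = CONTRIBUTION / COST_PER_M2
    return extra_m2 * v * NEIGHBORS

# Suppose each neighbor values park area at v = EUR 5 per m2.
v = 5
print(private_return(v))   # 500.0: less than the EUR 1,000 contribution
print(social_return(v))    # 50000.0: far more than the EUR 1,000 it costs
```

At v = €5/m2 the whole park is worth €50,000 to each neighbor, so everyone wants it built, yet no single neighbor's €1,000 pays off privately. That gap is exactly the free-rider incentive.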
Are there ways out of the prisoners’ dilemma logic? There are many. Some require repeated interaction, so that the logic changes. In a repeated situation, an individual may think differently: “OK, I will start by contributing, and I will keep contributing in the future as long as everyone did their part in the past. Otherwise, I will stop contributing.” Now there is an incentive to contribute (to keep the production of desired public goods going) as well as a punishment for free riders: if you do not contribute, no one else will, and you will lose too.
We can complicate the example in different ways. First, the punishment above is too unforgiving. Once we distrust each other, the distrust continues forever, and both the free rider and the cooperative person are punished. This can be partially solved by making the punishment phase last only a few periods. Another complication is that the set of neighbors may change over time, or that the interactions do not involve all neighbors all the time. Today, A, B and C are involved in a public good problem that affects only the three of them. Tomorrow a situation arises between D and E. The next day, it is between B, D and F, and so on. If the behavior of each individual is perfectly known to the rest, and if all individuals expect to be involved in some kind of prisoners’ dilemma game in the future, then it may be in everyone’s interest to keep a reputation for being cooperative, so that when two or more individuals with a record of cooperative behavior meet, cooperation continues.
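The two trigger strategies just described can be sketched in a few lines. This is a minimal illustration (not taken from either paper): the “grim” version stops contributing forever after any defection, while the limited version punishes only for the last k rounds and then forgives.

```python
# history: a list of rounds; each round is a tuple of booleans,
# one per other player (True = that player contributed).

def grim_trigger(history):
    """Contribute only if no one has EVER defected (unforgiving)."""
    return all(all(moves) for moves in history)

def limited_trigger(history, k=2):
    """Contribute unless someone defected within the last k rounds,
    so a single slip is eventually forgiven."""
    recent = history[-k:]
    return all(all(moves) for moves in recent)

# One opponent who slipped once in round 2 and cooperated since.
history = [(True,), (False,), (True,), (True,)]

print(grim_trigger(history))      # False: grim never forgives
print(limited_trigger(history))   # True: the slip is outside the 2-round window
```

The forgiving variant captures the fix mentioned above: distrust does not last forever, so a single mistake does not condemn both players to permanent mutual punishment.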
All of the above has been studied both theoretically and experimentally. In particular, most experiments on reputation have been conducted in a noise-free environment where, for every individual, reputation consists of past choices, information that is readily available to the rest of the individuals. Recently, experimentalists have explored what happens if reputation is noisy. This occurs, for instance, when, after an interaction, individuals are evaluated, and this evaluation is not an automatic record of the action taken, but another kind of information (like when we assign stars to movies). This is the approach taken in Masclet and Pénard (2012) 1, where the authors use a repeated trust game with an additional stage in which participants evaluate their partners, and where these evaluations (and not the actions) are what players know about each other. Within that design, they analyze how different reputation systems affect cooperation. They find that evaluations are strongly correlated with investment levels. Trust is highest in treatments in which participants evaluate each other simultaneously; when participants evaluate each other sequentially, they use negative evaluations as a means of reprisal against those who evaluated them negatively.
More recently, Greiff and Paetzel (2016) 2 use the same idea, but, instead of analyzing how behavior changes with the way evaluations are given, they study which kind of information about evaluations is relevant for fostering cooperation. In their experiment, participants are paired randomly and anonymously with a stranger. Each participant starts with an endowment of three monetary units, and must decide how many of these units to contribute to a common investment and how many to keep for a private investment. The private investment multiplies the money by a factor of four, and the common one multiplies it by six (and the proceeds are then divided equally between the two partners). For example, if Player 1 keeps one unit for the private investment (and contributes 2 to the common one), and Player 2 keeps two for her private investment (and contributes 1 to the common one), the common investment will produce 6x(2+1) = 18. Thus, Player 1 will earn 13 (4 from his private investment, and 9 from the common one); and Player 2 will earn 17 (8 and 9). In this game, the players can get up to 18 each by contributing all their units to the common investment. This is, however, a risky action, since either player may be tempted to invest nothing and get 12 (private) plus 9 (from the common investment), taking advantage of the contribution by the other player. In the experiment, individuals are paired this way 15 times. At the end of every encounter, individuals must evaluate their opponents. Before every play, individuals know how the opponent was evaluated, but not their actual past play. The experiment is conducted in three separate treatments. In one treatment, individuals also know the evaluations given to them; in a second treatment, they do not know the evaluations given to them; and in a third, control treatment, no information on evaluations is given to participants at all.
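The payoff arithmetic of this two-player game can be checked with a short function. This is a sketch of the rules as described above: three units each, private units multiplied by four, common units multiplied by six and split equally.

```python
ENDOWMENT = 3
PRIVATE_FACTOR = 4
COMMON_FACTOR = 6

def payoffs(c1, c2):
    """Earnings of players 1 and 2, given their contributions c1, c2
    (between 0 and 3) to the common investment."""
    common_share = COMMON_FACTOR * (c1 + c2) / 2   # half of the common proceeds
    p1 = PRIVATE_FACTOR * (ENDOWMENT - c1) + common_share
    p2 = PRIVATE_FACTOR * (ENDOWMENT - c2) + common_share
    return p1, p2

print(payoffs(2, 1))   # (13.0, 17.0): the worked example in the text
print(payoffs(3, 3))   # (18.0, 18.0): full cooperation
print(payoffs(0, 3))   # (21.0, 9.0): free-riding on a full contributor
```

The last line shows the temptation: against a full contributor, contributing nothing yields 21 instead of 18, which is what makes full cooperation risky.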
The authors tested and confirmed three hypotheses:
- Higher contributions receive better evaluations.
- Participants contribute more when their partner has a better evaluation.
- In the treatment where individuals also know their own evaluation, the previous effect is stronger for participants whose own evaluation is good.
The last hypothesis reflects a novel result. Although the experiments detect more contributions when information about the opponent is given (hypothesis 2), the real increase in contributions relative to the control group occurs when information about one’s own evaluation is also known. This makes sense because, in contrast to a noise-free environment, a participant cannot infer her own reputation from her own past actions. The authors’ explanation is that information about one’s own evaluation facilitates conditional cooperation because it influences second-order beliefs: I need to know not only that I am a cooperator, but also that others know this.
- Masclet, D., and Pénard, T. 2012. Do reputation feedback systems really improve trust among anonymous traders? An experimental study. Applied Economics 44, 4553–4573. ↩
- Greiff, M., and Paetzel, F. 2016. Second-order beliefs in reputation systems with endogenous evaluations – an experimental study. Games and Economic Behavior 97, 32–43. ↩