Evolution of Conformity in Social Dilemmas.

Dong Y, Li C, Tao Y, Zhang B - PLoS ONE (2015)

Bottom Line: We are particularly interested in the tit-for-tat (TFT) strategy, which is a well-known conforming strategy in theoretical and empirical studies. The stability analysis of adaptive dynamics shows that conformity in general promotes the evolution of cooperation, and that a regime of cooperation can be established in an AllD population through TFT-like strategies. These results provide insight into the emergence of cooperation in social dilemma games.


Affiliation: School of Statistics, Beijing Normal University, Beijing, China.

ABSTRACT
People often deviate from their individual Nash equilibrium strategy in game experiments based on the prisoner's dilemma (PD) game and the public goods game (PGG), whereas conditional cooperation, or conformity, is supported by the data from these experiments. In a complicated environment with no obvious "dominant" strategy, conformists who choose the average strategy of the other players in their group can avoid risk by guaranteeing that their income stays close to the group average. In this paper, we study the repeated PD game and the repeated m-person PGG, where individuals' strategies are restricted to the set of conforming strategies. We define a conforming strategy by two parameters: the initial action in the game and the influence of the other players' choices in the previous round. We are particularly interested in the tit-for-tat (TFT) strategy, which is a well-known conforming strategy in theoretical and empirical studies. In both the PD game and the PGG, TFT can prevent the invasion of the non-cooperative strategy if the expected number of rounds exceeds a critical value. The stability analysis of adaptive dynamics shows that conformity in general promotes the evolution of cooperation, and that a regime of cooperation can be established in an AllD population through TFT-like strategies. These results provide insight into the emergence of cooperation in social dilemma games.
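
As a concrete illustration of the two-parameter conforming strategies just described, the short sketch below simulates a repeated PD between two such players. The round-by-round update rule is our reading of the definition, not spelled out in this summary (though it is consistent with the p^2 + (1-p)^2 terms in Eq (1) further down): x is the probability of cooperating in the first round, and in every later round a player copies the co-player's previous move with probability p and plays the opposite move otherwise, with ω the probability that another round is played. TFT then corresponds to (x, p) = (1, 1). The payoff values and ω = 0.9 are illustrative choices, not taken from the paper.

    import random

    # Hypothetical reading of a conforming strategy (x, p) in the repeated PD:
    # x     = probability of cooperating in round 1,
    # p     = probability of copying the co-player's previous move in later rounds,
    # omega = continuation probability of the repeated game.

    def play(strat1, strat2, omega, payoffs, rng):
        """Return the total payoffs of two conforming strategies in one repeated PD."""
        R, S, T, P = payoffs
        table = {(1, 1): (R, R), (1, 0): (S, T), (0, 1): (T, S), (0, 0): (P, P)}
        (x1, p1), (x2, p2) = strat1, strat2
        a1 = 1 if rng.random() < x1 else 0   # 1 = cooperate, 0 = defect
        a2 = 1 if rng.random() < x2 else 0
        total1 = total2 = 0.0
        while True:
            u1, u2 = table[(a1, a2)]
            total1, total2 = total1 + u1, total2 + u2
            if rng.random() >= omega:        # the game ends with probability 1 - omega
                return total1, total2
            # each player copies the other's last move with prob. p, otherwise flips it
            a1, a2 = (a2 if rng.random() < p1 else 1 - a2,
                      a1 if rng.random() < p2 else 1 - a1)

    rng = random.Random(1)
    tft = (1.0, 1.0)        # cooperate first, always copy: tit-for-tat
    alld_like = (0.0, 1.0)  # defect first, always copy: behaves like AllD against itself
    for s1, s2 in [(tft, tft), (tft, alld_like)]:
        pays = [play(s1, s2, omega=0.9, payoffs=(3, 0, 5, 1), rng=rng) for _ in range(10000)]
        avg1 = sum(a for a, _ in pays) / len(pays)
        avg2 = sum(b for _, b in pays) / len(pays)
        print(s1, "vs", s2, "->", round(avg1, 2), round(avg2, 2))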


Fig 2 (pone.0137435.g002). Phase portrait of the adaptive dynamics Eqs (1) and (2). For each of the three panels, there is a curve p = p*(x) (blue dashed curve) separating the (x, p)-plane such that dx/dt > 0 for p > p*(x) and dx/dt < 0 for p < p*(x). Stable and unstable equilibria of the adaptive dynamics are marked by solid and empty dots, respectively. Trajectories with large initial p converge to x = 1, and those with small initial p converge to x = 0. (a) Repeated PD game with R = 4, P = 2, S = 0 and T = 5. (b) Repeated PD game with R = 3, P = 1, S = 0 and T = 4. Because R + P = S + T, there exists a critical p* = 0.1 such that a trajectory of Eq (2) starting from (x, p) converges to (1, p) if p > p* and to (0, p) if p < p*. (c) Repeated PD game with R = 3, P = 1, S = 0 and T = 5.
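
The critical value p* quoted for panel (b) can be recovered from Eq (2). The ω used in the figure is not reproduced in this summary, so it is kept as a free parameter; setting dx/dt = 0 when R + P = S + T gives

\[
\frac{R-P}{2(1-\omega)} \;=\; \frac{T-S}{2\bigl(1-\omega(1-2p^{*})\bigr)}
\quad\Longrightarrow\quad
p^{*} \;=\; \frac{1}{2}\left[\,1-\frac{1}{\omega}\left(1-\frac{(T-S)(1-\omega)}{R-P}\right)\right],
\]

which for R = 3, P = 1, S = 0, T = 4 reduces to p* = (1-ω)/(2ω); the reported p* = 0.1 would then correspond to ω = 5/6.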

Mentions: Let us now consider the situation with mutation and investigate the evolutionary dynamics on the (x, p)-plane. Following the standard adaptive dynamics model [29,48], we assume that mutations occur rarely and locally: a mutant adopts a new strategy obtained by adding a small random perturbation to the resident strategy. This assumption implies that a mutant either vanishes or takes over the population before the next mutation occurs, and that the mutational jumps are small enough for the resident strategy to change continuously [48]. Thus, the evolution of the resident strategy (x, p) in (0,1)^2 can be described by the following adaptive dynamics:

\[
\frac{dx}{dt} \;=\; \frac{R-P}{2(1-\omega)} \;-\; \frac{T-S}{2\bigl(1-\omega(1-2p)\bigr)} \;+\; (1-2x)\,\frac{S+T-R-P}{2\bigl(1-\omega\,(p^{2}+(1-p)^{2})\bigr)},
\qquad
\frac{dp}{dt} \;=\; \frac{(S+T-R-P)\,x(1-x)\,\omega\,(2p-1)}{\bigl(1-\omega\,(p^{2}+(1-p)^{2})\bigr)^{2}},
\tag{1}
\]

where dx/dt < 0 for p → 0 (see Section A in S1 Text). If dx/dt > 0 for p → 1 (this happens when ω is large), then there exists a curve p = p*(x) separating the (x, p)-plane such that dx/dt > 0 for p > p*(x) and dx/dt < 0 for p < p*(x); that is, x tends to increase when p is large and to decrease when p is small (see Fig 2 and Section A in S1 Text). The intuition is simple: if your opponent is a conformist, then cooperating in the first round yields a higher payoff because your opponent will follow your choice; if the opponent is not affected by your behavior, however, defecting is the best choice. In particular, when R + P = S + T, dx/dt is independent of x and p remains constant. In this case, Eq (1) simplifies to:

\[
\frac{dx}{dt} \;=\; \frac{R-P}{2(1-\omega)} \;-\; \frac{T-S}{2\bigl(1-\omega(1-2p)\bigr)},
\qquad
\frac{dp}{dt} \;=\; 0.
\tag{2}
\]
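
As a quick numerical check of these dynamics, the sketch below implements the right-hand side of Eq (1) and integrates it with a crude Euler scheme. The payoffs are those of Fig 2(b); since the ω used in the figure is not given in this summary, ω = 5/6 is assumed (the value implied by p* = (1-ω)/(2ω) = 0.1 for these payoffs, as derived after the caption above). With R + P = S + T, dp/dt vanishes, so each trajectory keeps its initial p and converges to x = 1 when p > p* and to x = 0 when p < p*.

    # Euler integration of the adaptive dynamics in Eq (1).
    # Payoffs from Fig 2(b); omega = 5/6 is an assumed value implied by p* = 0.1.

    def rhs(x, p, R, S, T, P, omega):
        """Right-hand side (dx/dt, dp/dt) of Eq (1)."""
        q = 1.0 - omega * (p**2 + (1.0 - p)**2)
        dx = ((R - P) / (2.0 * (1.0 - omega))
              - (T - S) / (2.0 * (1.0 - omega * (1.0 - 2.0 * p)))
              + (1.0 - 2.0 * x) * (S + T - R - P) / (2.0 * q))
        dp = (S + T - R - P) * x * (1.0 - x) * omega * (2.0 * p - 1.0) / q**2
        return dx, dp

    def integrate(x, p, R, S, T, P, omega, dt=0.01, steps=5000):
        """Crude Euler scheme, with (x, p) clipped to the unit square."""
        for _ in range(steps):
            dx, dp = rhs(x, p, R, S, T, P, omega)
            x = min(1.0, max(0.0, x + dt * dx))
            p = min(1.0, max(0.0, p + dt * dp))
        return x, p

    if __name__ == "__main__":
        params = dict(R=3, S=0, T=4, P=1, omega=5.0 / 6.0)   # Fig 2(b) payoffs
        for x0, p0 in [(0.5, 0.20), (0.5, 0.05)]:            # above / below p* = 0.1
            print((x0, p0), "->", integrate(x0, p0, **params))

Running this prints endpoints of (1.0, 0.20) and (0.0, 0.05), matching the two basins described in the caption for panel (b).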

