The social Bayesian brain: does mentalizing make a difference when we learn?

Devaine M, Hollard G, Daunizeau J - PLoS Comput. Biol. (2014)

Bottom Line: Moreover, we find that participants' choice sequences are best explained by sophisticated mentalizing Bayesian learning models only in the social framing. This study is the first demonstration of the added-value of mentalizing on learning in the context of repeated social interactions. Importantly, our results show that we would not be able to decipher intentional behaviour without a priori attributing mental states to others.

View Article: PubMed Central - PubMed

Affiliation: Brain and Spine Institute, Paris, France; INSERM, Paris, France.

ABSTRACT
When it comes to interpreting others' behaviour, we almost irrepressibly engage in the attribution of mental states (beliefs, emotions…). Such "mentalizing" can become very sophisticated, eventually endowing us with highly adaptive skills such as convincing, teaching or deceiving. Here, sophistication can be captured in terms of the depth of our recursive beliefs, as in "I think that you think that I think…" In this work, we test whether such sophisticated recursive beliefs subtend learning in the context of social interaction. We asked participants to play repeated games against artificial (Bayesian) mentalizing agents, which differ in their sophistication. Critically, we made people believe either that they were playing against each other, or that they were gambling like in a casino. Although both framings are similarly deceiving, participants win against the artificial (sophisticated) mentalizing agents in the social framing of the task, and lose in the non-social framing. Moreover, we find that participants' choice sequences are best explained by sophisticated mentalizing Bayesian learning models only in the social framing. This study is the first demonstration of the added-value of mentalizing on learning in the context of repeated social interactions. Importantly, our results show that we would not be able to decipher intentional behaviour without a priori attributing mental states to others.

pcbi-1003992-g006: Bayesian model comparison. Left: exceedance probabilities of the no-ToM (T-) and ToM (T+) model families (red: non-social framing, blue: social framing). Right: exceedance probabilities of the no-ToM/non-Bayesian (T-B-), no-ToM/Bayesian (T-B+), ToM/Bayesian (T+B+) and ToM/non-Bayesian (T+B-) model families.

Mentions: Lastly, we performed a formal model-based analysis of people's trial-by-trial choice sequences, with the aim of identifying the most likely learning scenario in both the social and non-social framings. In brief, we performed a group-level random-effects Bayesian model comparison (RFX-BMS, [36]) of fourteen different models (cf. Table 2). These include meta-Bayesian ToM models (1-ToM, 2-ToM and 3-ToM), non-Bayesian ToM models (1-Inf and 2-Inf), Bayesian no-ToM models (0-ToM, hBL, 1-BSL, 2-BSL and 3-BSL), as well as non-Bayesian no-ToM models (RL, WSLS, Nash and Volterra decompositions). In what follows, we exploit two orthogonal partitions of this model set, namely: T+/T- (models that do or do not include mentalizing) and B+/B- (models that do or do not rely upon Bayesian belief updates). Note that all models include a bias term that can capture a systematic tendency to prefer one option over the other (within games/sessions). First, we performed Bayesian hypothesis tests to assess the stability of model attributions across conditions. To begin with, we tested, for each opponent, the hypothesis that the model family (T+ versus T-) used in the social framing was the same as in the non-social framing. Evidence for the hypothesis was found for the control condition RB (EP = 95%). However, evidence for a difference in model families across framings was found for both the 0-ToM (EP = 23%) and 1-ToM (EP = 0%) opponents. The test was inconclusive for 2-ToM (EP = 53%). Then, we tested whether the same model family was used across opponents in a given framing. In this case, we found strong statistical evidence in favour of the stability of model attributions. More precisely, the hypothesis was strongly supported for all between-conditions comparisons (EP>83%), with the exception of the comparison between 2-ToM and RB in the social framing, which yielded weaker evidence (EP = 69%).
Overall, this analysis indicates that people's learning rule is mostly framing-dependent (but not opponent-dependent). This motivates our final analysis, which is essentially a framing-specific RFX-BMS. The result of this procedure is depicted in Fig. 6, which shows the exceedance probability of model families in both the social and non-social conditions. We refer the interested reader to Text S1 for quantitative diagnostics of the RFX-BMS approach (cf. fixed-effect analysis and confusion matrices).
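The group-level procedure above can be sketched in Python, assuming the standard variational random-effects BMS scheme (Stephan et al., 2009, presumably the method behind reference [36]): given each subject's log model evidences, one estimates a Dirichlet posterior over model frequencies, then computes exceedance probabilities by Monte Carlo sampling. The function names (`rfx_bms`, `family_ep`) are ours, and the family-level computation shown here (summing sampled frequencies within each family) is a simple approximation; the paper's actual family inference may use adjusted priors.

```python
import numpy as np
from scipy.special import digamma

def rfx_bms(log_evidence, n_samples=100_000, tol=1e-6, max_iter=500, seed=0):
    """Random-effects Bayesian model selection (variational scheme).
    `log_evidence`: (n_subjects, n_models) array of log model evidences.
    Returns the Dirichlet posterior parameters and each model's
    exceedance probability (the probability that it is the most
    frequent model in the population)."""
    n, k = log_evidence.shape
    alpha = np.ones(k)                               # Dirichlet prior counts
    for _ in range(max_iter):
        # E-step: posterior model assignment for each subject
        log_u = log_evidence + digamma(alpha) - digamma(alpha.sum())
        log_u -= log_u.max(axis=1, keepdims=True)    # numerical stability
        u = np.exp(log_u)
        u /= u.sum(axis=1, keepdims=True)
        # M-step: update Dirichlet counts
        alpha_new = 1.0 + u.sum(axis=0)
        if np.abs(alpha_new - alpha).max() < tol:
            alpha = alpha_new
            break
        alpha = alpha_new
    # exceedance probability via sampling from the Dirichlet posterior
    rng = np.random.default_rng(seed)
    r = rng.dirichlet(alpha, size=n_samples)
    ep = np.bincount(r.argmax(axis=1), minlength=k) / n_samples
    return alpha, ep

def family_ep(alpha, families, n_samples=100_000, seed=0):
    """Exceedance probability of model *families* (e.g. T+ vs T-):
    the probability that one family's summed frequency exceeds the
    others'. `families` is a list of lists of model indices."""
    rng = np.random.default_rng(seed)
    r = rng.dirichlet(alpha, size=n_samples)
    fam = np.stack([r[:, idx].sum(axis=1) for idx in families], axis=1)
    return np.bincount(fam.argmax(axis=1), minlength=len(families)) / n_samples
```

For instance, with 14 models partitioned as in the text, `family_ep(alpha, [tom_idx, no_tom_idx])` would yield the T+ vs T- exceedance probabilities plotted in the left panel of the figure, one call per framing.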

