The social Bayesian brain: does mentalizing make a difference when we learn?

Devaine M, Hollard G, Daunizeau J - PLoS Comput. Biol. (2014)

Bottom Line: Moreover, we find that participants' choice sequences are best explained by sophisticated mentalizing Bayesian learning models only in the social framing. This study is the first demonstration of the added-value of mentalizing on learning in the context of repeated social interactions. Importantly, our results show that we would not be able to decipher intentional behaviour without a priori attributing mental states to others.

Affiliation: Brain and Spine Institute, Paris, France; INSERM, Paris, France.

ABSTRACT
When it comes to interpreting others' behaviour, we almost irrepressibly engage in the attribution of mental states (beliefs, emotions…). Such "mentalizing" can become very sophisticated, eventually endowing us with highly adaptive skills such as convincing, teaching or deceiving. Here, sophistication can be captured in terms of the depth of our recursive beliefs, as in "I think that you think that I think…" In this work, we test whether such sophisticated recursive beliefs subtend learning in the context of social interaction. We asked participants to play repeated games against artificial (Bayesian) mentalizing agents, which differ in their sophistication. Critically, we made people believe either that they were playing against each other, or that they were gambling like in a casino. Although both framings are similarly deceiving, participants win against the artificial (sophisticated) mentalizing agents in the social framing of the task, and lose in the non-social framing. Moreover, we find that participants' choice sequences are best explained by sophisticated mentalizing Bayesian learning models only in the social framing. This study is the first demonstration of the added-value of mentalizing on learning in the context of repeated social interactions. Importantly, our results show that we would not be able to decipher intentional behaviour without a priori attributing mental states to others.
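To make the notion of recursive belief depth concrete, here is a deliberately simplified sketch of a depth-k agent for a hide-and-seek-like game. It only illustrates the recursion "I model you modelling me": the exponentially weighted frequency tracking, the hard best-response, and the assumption that every level best-responds by matching are simplifying choices of ours, not the authors' Bayesian k-ToM model (which is probabilistic and handles the hider/seeker asymmetry).

```python
import numpy as np

def predict_opponent(k, my_history, their_history, decay=0.8):
    """Toy depth-k prediction of P(opponent plays action 1 on the next trial).

    0-ToM ignores intentions and simply tracks the opponent's recent
    frequency of playing 1 (exponentially weighted by `decay`).
    k-ToM (k > 0) assumes the opponent is a (k-1)-ToM watching *me*,
    simulates that opponent's prediction of my next move, and assumes she
    will best-respond by matching it (glossing over the hider/seeker
    asymmetry of the real game).
    """
    if k == 0:
        their_history = np.asarray(their_history, dtype=float)
        if their_history.size == 0:
            return 0.5
        weights = decay ** np.arange(their_history.size - 1, -1, -1)
        return float(weights @ their_history / weights.sum())
    # One level down, the roles are swapped: the opponent models me.
    p_my_next = predict_opponent(k - 1, their_history, my_history, decay)
    if p_my_next > 0.5:
        return 1.0
    if p_my_next < 0.5:
        return 0.0
    return 0.5

# Example: a 2-ToM prediction from short toy histories (actions coded 0/1)
print(predict_opponent(2, my_history=[1, 1, 0, 1], their_history=[0, 1, 1, 1]))
```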

pcbi-1003992-g001: Volterra decomposition of k-ToM's response. Left: impulse response to k-ToM's own action (x-axis: lag; y-axis: Volterra weight). Right: impulse response to k-ToM's opponent's action. ToM sophistication levels are colour-coded (blue: 0-ToM, green: 1-ToM, red: 2-ToM, magenta: 3-ToM). The grey shaded area denotes chance level.

Mentions: At this point, one may not have a clear intuition about how such k-ToM agents react to their opponents' choices. We thus performed Volterra decompositions of simulated choice sequences of artificial k-ToM agents playing "hide and seek" against a random opponent. In our context, this means regressing k-ToM's simulated choices onto (i) her opponent's past choices, and (ii) her own past choices (see Text S1). In brief, a positive Volterra weight captures a tendency to reproduce or copy the corresponding action. Fig. 1 shows the estimated Volterra kernels of k-ToM agents, averaged across a thousand Monte Carlo simulations. Chance level was derived from the extremal Volterra weights estimated for a random choice sequence. We also evaluated the fit accuracy of the Volterra decomposition, in terms of the percentage of correctly predicted choices.
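For readers unfamiliar with the procedure, a first-order Volterra decomposition of a binary choice sequence amounts to regressing the current choice onto lagged copies of both players' past choices; the fitted lag weights are the kernels plotted in Fig. 1. The sketch below illustrates this under a logistic-link assumption; the function name, the five-lag window, and the scikit-learn estimator are illustrative choices of ours, not the authors' exact estimation procedure (described in Text S1).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def volterra_kernels(own, opp, max_lag=5):
    """First-order Volterra decomposition of a binary (0/1) choice sequence.

    Regresses the agent's choice at trial t onto her own and her opponent's
    choices at lags 1..max_lag (logistic link). A positive weight at a given
    lag indicates a tendency to repeat (self-kernel) or copy (opponent-kernel)
    the corresponding past action.
    """
    own = np.asarray(own, dtype=float)
    opp = np.asarray(opp, dtype=float)
    T = len(own)
    # Design matrix: [own(t-1..t-max_lag), opp(t-1..t-max_lag)]
    X = np.column_stack(
        [own[max_lag - lag: T - lag] for lag in range(1, max_lag + 1)]
        + [opp[max_lag - lag: T - lag] for lag in range(1, max_lag + 1)]
    )
    y = own[max_lag:]
    w = LogisticRegression().fit(X, y).coef_.ravel()
    return w[:max_lag], w[max_lag:]  # (self-kernel, opponent-kernel)

# Toy check: an imitator that simply copies the opponent's previous move
# should show a strong positive opponent-kernel weight at lag 1.
rng = np.random.default_rng(0)
opp = rng.integers(0, 2, 500)
own = np.roll(opp, 1)
k_self, k_opp = volterra_kernels(own, opp)
print("self kernel:    ", np.round(k_self, 2))
print("opponent kernel:", np.round(k_opp, 2))
```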

