Using data-driven model-brain mappings to constrain formal models of cognition.

Borst JP, Nijboer M, Taatgen NA, van Rijn H, Anderson JR - PLoS ONE (2015)

Bottom Line: Although such mappings can be based on the experience of the modeler or on a reading of the literature, a formal method is preferred to prevent researcher-based biases. We then validated this mapping by applying it to two new datasets with associated models. The new mapping was at least as powerful as an existing mapping that was based on the literature, and indicated where the models were supported by the data and where they have to be improved.


Affiliation: Carnegie Mellon University, Dept. of Psychology, Pittsburgh, United States of America; University of Groningen, Dept. of Artificial Intelligence, Groningen, the Netherlands.

ABSTRACT
In this paper we propose a method to create data-driven mappings from components of cognitive models to brain regions. Cognitive models are notoriously hard to evaluate, especially based on behavioral measures alone. Neuroimaging data can provide additional constraints, but this requires a mapping from model components to brain regions. Although such mappings can be based on the experience of the modeler or on a reading of the literature, a formal method is preferred to prevent researcher-based biases. In this paper we used model-based fMRI analysis to create a data-driven model-brain mapping for five modules of the ACT-R cognitive architecture. We then validated this mapping by applying it to two new datasets with associated models. The new mapping was at least as powerful as an existing mapping that was based on the literature, and indicated where the models were supported by the data and where they have to be improved. We conclude that data-driven model-brain mappings can provide strong constraints on cognitive models, and that model-based fMRI is a suitable way to create such mappings.


pone.0119673.g005: Response times of the algebra dataset. Blue bars indicate small heights, orange bars large heights; dark parts of the bars indicate the time until the first mouse click, light parts the time between the first click and clicking the submit button. Error bars indicate the average standard deviation.

Mentions: Fig. 5 shows the response times, on the left for the data, on the right for the model. Height had a substantial effect on response times, with large heights (orange bars) leading to longer response times than small heights (blue bars). Given that the height determines the number of terms in the addition, this is not surprising. A repeated-measures ANOVA confirmed this effect: F(1,17) = 190.9, p < .001. In addition to height, base also had a positive effect on RTs, with large bases leading to longer RTs (F(1,17) = 20.5, p < .001). This effect is explained by the slightly larger numbers that have to be added, which more often result in carries. Comparing the 'cognitive phase' (dark parts of the bars, before the first number was entered) to the response phase (light parts, first number until submit button) indicates that the effects on RT were almost completely due to the cognitive phase. In addition, the average standard deviations indicate that most of the variability was also contained in the cognitive phase. The model captured all of these effects; the biggest discrepancy between model and empirical data is the lower variability in the model's cognitive phase, especially for the large heights.
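The height effect above was confirmed with a repeated-measures (within-subjects) ANOVA, e.g. F(1,17) = 190.9. As a minimal self-contained sketch of how such a one-way within-subjects F statistic is computed by partitioning sums of squares (this is not the authors' analysis code, and the RT values below are invented for illustration):

```python
# One-way repeated-measures ANOVA: the error term is the
# condition-by-subject residual, not the pooled within-cell variance.

def rm_anova(data):
    """data: one row per subject, one RT per condition.
    Returns (F, df_condition, df_error)."""
    n = len(data)       # number of subjects
    k = len(data[0])    # number of conditions
    grand = sum(sum(row) for row in data) / (n * k)
    cond_means = [sum(row[j] for row in data) / n for j in range(k)]
    subj_means = [sum(row) / k for row in data]

    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_error = ss_total - ss_cond - ss_subj  # condition x subject residual

    df_cond = k - 1
    df_error = (k - 1) * (n - 1)
    f = (ss_cond / df_cond) / (ss_error / df_error)
    return f, df_cond, df_error

# Three hypothetical subjects, two height conditions (small, large):
rts = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
f, df1, df2 = rm_anova(rts)
print(f"F({df1},{df2}) = {f:.1f}")  # F(1,2) = 12.0
```

With two conditions this F equals the square of the paired t statistic, which is a quick sanity check on the decomposition.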

