Is Model Fitting Necessary for Model-Based fMRI?

Wilson RC, Niv Y - PLoS Comput. Biol. (2015)

Bottom Line: With this method, theoretical models of behavior can be leveraged to find the brain structures underlying variables from specific algorithms, such as prediction errors in reinforcement learning. We found that even gross errors in the learning rate lead to only minute changes in the neural results. While these specific results pertain only to the effect of learning rate in simple reinforcement learning models, we provide a template for testing for effects of different parameters in other models.

View Article: PubMed Central - PubMed

Affiliation: Department of Psychology and Cognitive Science Program, University of Arizona, Tucson, Arizona, United States of America.

ABSTRACT
Model-based analysis of fMRI data is an important tool for investigating the computational role of different brain regions. With this method, theoretical models of behavior can be leveraged to find the brain structures underlying variables from specific algorithms, such as prediction errors in reinforcement learning. One potential weakness with this approach is that models often have free parameters and thus the results of the analysis may depend on how these free parameters are set. In this work we asked whether this hypothetical weakness is a problem in practice. We first developed general closed-form expressions for the relationship between results of fMRI analyses using different regressors, e.g., one corresponding to the true process underlying the measured data and one a model-derived approximation of the true generative regressor. Then, as a specific test case, we examined the sensitivity of model-based fMRI to the learning rate parameter in reinforcement learning, both in theory and in two previously-published datasets. We found that even gross errors in the learning rate lead to only minute changes in the neural results. Our findings thus suggest that precise model fitting is not always necessary for model-based fMRI. They also highlight the difficulty in using fMRI data for arbitrating between different models or model parameters. While these specific results pertain only to the effect of learning rate in simple reinforcement learning models, we provide a template for testing for effects of different parameters in other models.
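The abstract's central claim, that prediction-error regressors derived under very different learning rates remain highly similar, can be illustrated with a minimal simulation. This is a sketch, not the authors' analysis code: the reward process, the two learning rates, and the simple Rescorla-Wagner learner are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reward series: a slow random-walk drift plus Gaussian observation noise,
# standing in for the bandit tasks analyzed in the paper.
T = 500
rewards = np.cumsum(rng.normal(0.0, 0.1, T)) + rng.normal(0.0, 1.0, T)

def prediction_errors(rewards, alpha):
    """Rescorla-Wagner delta-rule prediction errors for learning rate alpha."""
    v = 0.0
    deltas = np.empty(len(rewards))
    for t, r in enumerate(rewards):
        deltas[t] = r - v          # prediction error: reward minus current value
        v += alpha * deltas[t]     # value update scaled by the learning rate
    return deltas

# Prediction-error regressors under a "true" and a grossly different learning rate
delta_true = prediction_errors(rewards, alpha=0.1)
delta_wrong = prediction_errors(rewards, alpha=0.7)

corr = np.corrcoef(delta_true, delta_wrong)[0, 1]
print(f"correlation between PE regressors: {corr:.3f}")
```

Even with a sevenfold difference in learning rate, the two prediction-error time series are dominated by the shared reward signal and so remain strongly correlated, which is the intuition behind the paper's result.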

No MeSH data available.


Related in: MedlinePlus

An example drifting reward distribution. (A) Evolution of the mean mt over time, t, diffusing with a drift standard deviation σd. The decay, γ, is indicated by the gray arrows and the shaded region indicates the standard deviation of the Gaussian noise distribution, σn. (B) A set of rewards sampled from the distribution in panel A.

pcbi.1004237.g006: An example drifting reward distribution. (A) Evolution of the mean mt over time, t, diffusing with a drift standard deviation σd. The decay, γ, is indicated by the gray arrows and the shaded region indicates the standard deviation of the Gaussian noise distribution, σn. (B) A set of rewards sampled from the distribution in panel A.

Mentions: Our approach can also be applied to scenarios in which the reward distribution is not fixed. To illustrate, we analyze experiments with rewards that are drawn from a Gaussian distribution whose mean, mt, is generated by a discretized Ornstein-Uhlenbeck process (Fig 6) [31]. Specifically, mt undergoes a random walk defined by

mt+1 = γ mt + nt,    (20)

where nt is zero-mean noise with drift variance σd², and γ (< 1) is a decay parameter. Because γ is smaller than one, the mean tends to decay to zero over time (illustrated by the arrows in Fig 6A). This helps keep the means of different options from diverging too far as the experiment progresses.
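A short simulation may make Eq 20 concrete. The parameter values below (γ, σd, σn, T) are arbitrary assumptions chosen for illustration, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

gamma = 0.95    # decay parameter (< 1), pulls the mean back toward zero
sigma_d = 0.5   # drift standard deviation of the noise n_t
sigma_n = 1.0   # observation noise on each sampled reward
T = 1000

# Discretized Ornstein-Uhlenbeck mean (Eq 20): m_{t+1} = gamma * m_t + n_t
m = np.empty(T)
m[0] = 0.0
for t in range(T - 1):
    m[t + 1] = gamma * m[t] + rng.normal(0.0, sigma_d)

# Rewards are the drifting mean plus Gaussian observation noise (as in Fig 6B)
rewards = m + rng.normal(0.0, sigma_n, T)

# Because gamma < 1 the mean process is stationary; its variance settles near
# sigma_d**2 / (1 - gamma**2), so the means of different options stay bounded.
stationary_var = sigma_d**2 / (1 - gamma**2)
```

The stationary-variance expression shows directly why the decay keeps option means from diverging: as γ approaches 1 the process tends toward an unbounded random walk, while γ < 1 caps the long-run spread.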


Is Model Fitting Necessary for Model-Based fMRI?

Wilson RC, Niv Y - PLoS Comput. Biol. (2015)

An example drifting reward distribution.(A) Evolution of the mean mt over time, t, diffusing with a drift standard deviation σd. The decay, γ, is indicated by the gray arrows and the shaded region indicates the standard deviation of the Gaussian noise distribution, σn. (B) A set of rewards sampled from the distribution in panel A.
© Copyright Policy
Related In: Results  -  Collection

License
Show All Figures
getmorefigures.php?uid=PMC4472514&req=5

pcbi.1004237.g006: An example drifting reward distribution.(A) Evolution of the mean mt over time, t, diffusing with a drift standard deviation σd. The decay, γ, is indicated by the gray arrows and the shaded region indicates the standard deviation of the Gaussian noise distribution, σn. (B) A set of rewards sampled from the distribution in panel A.
Mentions: Our approach can also be applied to scenarios in which the reward distribution is not fixed. To illustrate, we analyze experiments with rewards that are drawn from a Gaussian distribution whose mean, mt, is generated by a discretized Ornstein-Uhlenbeck process (Fig 6) [31]. Specifically, mt, undergoes a random walk defined bymt+1=γmt+nt(20)where nt is zero mean noise with drift variance , and γ (< 1) is a decay parameter. Because γ is smaller than one, the mean tends to decay to zero over time (illustrated by the arrows in Fig 6A). This helps to keep the means of different options from diverging too far as the experiment progresses.

Bottom Line: With this method, theoretical models of behavior can be leveraged to find the brain structures underlying variables from specific algorithms, such as prediction errors in reinforcement learning.We found that even gross errors in the learning rate lead to only minute changes in the neural results.While these specific results pertain only to the effect of learning rate in simple reinforcement learning models, we provide a template for testing for effects of different parameters in other models.

View Article: PubMed Central - PubMed

Affiliation: Department of Psychology and Cognitive Science Program, University of Arizona, Tucson Arizona, United States of America.

ABSTRACT
Model-based analysis of fMRI data is an important tool for investigating the computational role of different brain regions. With this method, theoretical models of behavior can be leveraged to find the brain structures underlying variables from specific algorithms, such as prediction errors in reinforcement learning. One potential weakness with this approach is that models often have free parameters and thus the results of the analysis may depend on how these free parameters are set. In this work we asked whether this hypothetical weakness is a problem in practice. We first developed general closed-form expressions for the relationship between results of fMRI analyses using different regressors, e.g., one corresponding to the true process underlying the measured data and one a model-derived approximation of the true generative regressor. Then, as a specific test case, we examined the sensitivity of model-based fMRI to the learning rate parameter in reinforcement learning, both in theory and in two previously-published datasets. We found that even gross errors in the learning rate lead to only minute changes in the neural results. Our findings thus suggest that precise model fitting is not always necessary for model-based fMRI. They also highlight the difficulty in using fMRI data for arbitrating between different models or model parameters. While these specific results pertain only to the effect of learning rate in simple reinforcement learning models, we provide a template for testing for effects of different parameters in other models.

No MeSH data available.


Related in: MedlinePlus