Is Model Fitting Necessary for Model-Based fMRI?

Wilson RC, Niv Y - PLoS Comput. Biol. (2015)

Bottom Line: With this method, theoretical models of behavior can be leveraged to find the brain structures underlying variables from specific algorithms, such as prediction errors in reinforcement learning. We found that even gross errors in the learning rate lead to only minute changes in the neural results. While these specific results pertain only to the effect of learning rate in simple reinforcement learning models, we provide a template for testing for effects of different parameters in other models.


Affiliation: Department of Psychology and Cognitive Science Program, University of Arizona, Tucson, Arizona, United States of America.

ABSTRACT
Model-based analysis of fMRI data is an important tool for investigating the computational role of different brain regions. With this method, theoretical models of behavior can be leveraged to find the brain structures underlying variables from specific algorithms, such as prediction errors in reinforcement learning. One potential weakness of this approach is that models often have free parameters, and thus the results of the analysis may depend on how these free parameters are set. In this work we asked whether this hypothetical weakness is a problem in practice. We first developed general closed-form expressions for the relationship between results of fMRI analyses using different regressors, e.g., one corresponding to the true process underlying the measured data and the other a model-derived approximation of that true generative regressor. Then, as a specific test case, we examined the sensitivity of model-based fMRI to the learning rate parameter in reinforcement learning, both in theory and in two previously published datasets. We found that even gross errors in the learning rate lead to only minute changes in the neural results. Our findings thus suggest that precise model fitting is not always necessary for model-based fMRI. They also highlight the difficulty in using fMRI data for arbitrating between different models or model parameters. While these specific results pertain only to the effect of learning rate in simple reinforcement learning models, we provide a template for testing for effects of different parameters in other models.
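To make the comparison concrete, here is a minimal Python sketch of the kind of analysis the paper describes: a simple delta-rule learner generates value and prediction-error regressors under a "generative" learning rate and a grossly different "fit" one, and we correlate the resulting time series. The reward process, learning rates, and all names below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def delta_rule(rewards, alpha, v0=0.0):
    """Simple delta-rule learner: returns value and prediction-error
    time series for a given learning rate alpha."""
    v, values, pes = v0, [], []
    for r in rewards:
        values.append(v)          # value prediction before the outcome
        pe = r - v                # reward prediction error
        pes.append(pe)
        v += alpha * pe           # delta-rule update
    return np.array(values), np.array(pes)

# Hypothetical example: rewards from a stationary Bernoulli bandit
rng = np.random.default_rng(0)
rewards = rng.binomial(1, 0.7, size=200).astype(float)

# "Generative" learning rate alpha_g vs. a very different "fit" alpha_f
V_g, PE_g = delta_rule(rewards, alpha=0.1)
V_f, PE_f = delta_rule(rewards, alpha=0.5)

# Correlation between the two prediction-error regressors; high values
# mean an fMRI GLM would give similar results with either regressor
print(np.corrcoef(PE_g, PE_f)[0, 1])
```

If this correlation is high, the GLM betas obtained with the misfit regressor will closely track those obtained with the generative one, which is the intuition behind the paper's closed-form results.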


pcbi.1004237.g008: Insensitivity of value (A) and prediction error (B) regressors to the fit learning rate, as a function of the decay of the reward mean to zero, γ, and the drift-variance to noise-variance ratio of the reward mean, σd/σn, in experiments with drifting rewards. The three black crosses indicate the parameter values used in the examples in Fig 7.

Mentions: To explore the parameter space more thoroughly, we quantified the 'insensitivity to learning rate' as the fraction of (αg, αf)-space in which the correlations exceed 0.7. This metric is 1 when the correlations depend only weakly on the learning rate (as for the prediction error in the case of fixed rewards) and 0 when they are exquisitely sensitive to it. Fig 8 shows this metric as a function of the two parameters, γ and σd/σn, for the value and prediction-error regressors. The plot reveals a somewhat reciprocal relationship between ρ(Vg, Vf) and ρ(δg, δf): when the prediction errors are more sensitive to the learning rate, the values tend to be less sensitive, and vice versa. Thus, while the sensitivity to learning rate can be tuned, there is a tradeoff between the sensitivity of the prediction-error regressor and that of the value regressor.
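The insensitivity metric itself is straightforward to sketch. Assuming the same delta-rule learner as in the earlier sketch and a drifting-reward process parameterized by decay γ, drift noise σd, and observation noise σn (the quantities varied in Fig 8), the hypothetical code below computes the fraction of (αg, αf) pairs whose regressors correlate above 0.7. It illustrates the metric's definition only; it is not the authors' implementation.

```python
import numpy as np

def delta_rule(rewards, alpha):
    """Delta-rule values and prediction errors (as in the earlier sketch)."""
    v, values, pes = 0.0, [], []
    for r in rewards:
        values.append(v)
        pe = r - v
        pes.append(pe)
        v += alpha * pe
    return np.array(values), np.array(pes)

def insensitivity(rewards, alphas, threshold=0.7, use_pe=True):
    """Fraction of (alpha_g, alpha_f) pairs whose regressors
    correlate above `threshold` (diagonal pairs included)."""
    series = [delta_rule(rewards, a)[1 if use_pe else 0] for a in alphas]
    n, count = len(alphas), 0
    for i in range(n):
        for j in range(n):
            rho = np.corrcoef(series[i], series[j])[0, 1]
            count += rho > threshold
    return count / n**2

# Hypothetical drifting-reward process: the mean decays toward zero at
# rate gamma with drift noise sigma_d; rewards add observation noise sigma_n.
rng = np.random.default_rng(1)
gamma, sigma_d, sigma_n, T = 0.95, 1.0, 1.0, 500
mu = np.zeros(T)
for t in range(1, T):
    mu[t] = gamma * mu[t - 1] + sigma_d * rng.standard_normal()
rewards = mu + sigma_n * rng.standard_normal(T)

# Scan a grid of learning rates for both regressor types
alphas = np.linspace(0.05, 0.95, 19)
print(insensitivity(rewards, alphas, use_pe=True))   # prediction errors
print(insensitivity(rewards, alphas, use_pe=False))  # values
```

Sweeping gamma and the sigma_d/sigma_n ratio over a grid and plotting the two printed quantities would reproduce the structure of Fig 8, including the reciprocal tradeoff between the value and prediction-error maps.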

