Deciding not to decide: computational and neural evidence for hidden behavior in sequential choice.

Gluth S, Rieskamp J, Büchel C - PLoS Comput. Biol. (2013)

Bottom Line: Understanding the cognitive and neural processes that underlie human decision making requires the successful prediction not only of how but also of when people choose. Standard SSM implementations did not describe RT distributions adequately. Our results show how computational modeling of decisions and RTs supports a deeper understanding of the hidden dynamics in cognition.


Affiliation: Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Department of Psychology, University of Basel, Basel, Switzerland.

ABSTRACT
Understanding the cognitive and neural processes that underlie human decision making requires the successful prediction not only of how but also of when people choose. Sequential sampling models (SSMs) have greatly advanced the decision sciences by assuming that decisions emerge from a bounded evidence accumulation process, so that response times (RTs) become predictable. Here, we demonstrate a difficulty of SSMs that occurs when people are not forced to respond at once but are allowed to sample information sequentially: the decision maker might decide to delay the choice and terminate the accumulation process temporarily, a scenario not accounted for by the standard SSM approach. We developed several SSMs for predicting RTs from two independent samples of an electroencephalography (EEG) and a functional magnetic resonance imaging (fMRI) study. In these studies, participants bought or rejected fictitious stocks based on sequentially presented cues and were free to respond at any time. Standard SSM implementations did not describe RT distributions adequately. However, by adding a mechanism for postponing decisions to the model, we obtained an accurate fit to the data. Time-frequency analysis of the EEG data revealed alternating states of decreasing and increasing oscillatory power in beta-band frequencies (14-30 Hz), indicating that responses were repeatedly prepared and inhibited, thus lending further support to the existence of a decision not to decide. Finally, the extended model accounted for the results of an adapted version of our paradigm in which participants had to press a button to sample more information. Our results show how computational modeling of decisions and RTs supports a deeper understanding of the hidden dynamics in cognition.
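To make the modeling idea concrete, the following Python sketch simulates a bounded evidence accumulation process over sequentially presented cues, with an optional mechanism that occasionally inhibits a prepared response and waits for the next cue. This is a minimal illustration, not the authors' implementation: the drift, noise, threshold, and in particular the partial-reset postponement rule (p_postpone) are assumptions chosen only to convey the idea of a decision not to decide.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_trial(cue_values, drift_scale=0.1, noise_sd=1.0,
                   threshold=3.0, dt=0.01, max_t_per_cue=2.0,
                   p_postpone=0.3):
    """Simulate one trial of a bounded evidence accumulation process
    over sequentially presented cues (ratings).

    cue_values -- signed cue strengths, one per rating; positive values
                  favour 'buy', negative values favour 'reject'
    p_postpone -- probability of withholding a prepared response when a
                  bound is hit before the last cue (the hypothesised
                  'decision not to decide'); 0 gives a standard SSM
    Returns (choice, rating_number, rt_within_rating); choice is None if
    no bound is reached by the end of the last cue.
    """
    x = 0.0                              # accumulated evidence
    n_steps = int(max_t_per_cue / dt)
    for i, cue in enumerate(cue_values):
        for step in range(n_steps):
            # Euler step of a drift-diffusion-style process for the current cue
            x += drift_scale * cue * dt + noise_sd * np.sqrt(dt) * rng.normal()
            if abs(x) >= threshold:
                last_cue = (i == len(cue_values) - 1)
                if not last_cue and rng.random() < p_postpone:
                    # decision not to decide: inhibit the response, partially
                    # reset the evidence, and wait for the next cue
                    # (this reset rule is an assumption of the sketch)
                    x = np.sign(x) * 0.5 * threshold
                    break
                return ("buy" if x > 0 else "reject"), i + 1, (step + 1) * dt
    return None, len(cue_values), max_t_per_cue

# Example: one fictitious stock described by six sequential ratings
print(simulate_trial([0.5, 1.0, -0.5, 1.5, 1.0, 0.5]))
```

With p_postpone set to 0 the sketch reduces to a standard bounded accumulator; raising it shifts responses toward later ratings and reshapes the within-rating RT distribution, which is the qualitative effect the extended model exploits.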



pcbi-1003309-g003: Model comparison based on the number of ratings. (A) Relative frequencies of buy (blue) and reject (red) decisions as well as model predictions of M1standard and M1*evidence when estimated to predict the number of acquired ratings, separately for the ratings from 1 to 6. (B) Average choice point in terms of rating number (x-axis) and mean RT (y-axis) per participant, together with the respective predictions from the models M1standard and M1*evidence. Horizontal and vertical lines at the data points represent 95% confidence intervals.

Mentions: We first estimated the computational models with respect to the probability with which they predicted the observed choice (buy or reject) at the observed rating number (from 1 to 6), as realized in our previous studies [15], [16]. This was done, first, to ensure that the new implementations perform roughly as well as our previous SSM and, second, to demonstrate that the new models' predictive accuracies are similar as long as the RT distributions are not considered. In terms of the Bayesian Information Criterion (BIC), the model from our previous studies ("M0") outperforms the new models (all "M1") (Table 1). However, the new models perform comparably well, predicting choices (∼90%) and the number of sampled ratings (∼65%) almost as accurately as M0. Most importantly, there are virtually no differences between the new candidates that include a decision not to decide (all "M1*") and the model without this decision ("M1standard") (Table 1; Figure 3A). Figure 3B shows the average choice rating (i.e., the rating at which the choice was made) and RT (i.e., the exact time of the decision within the choice rating) per participant, together with the predictions from the new models M1standard and M1*evidence. With respect to the choice ratings (x-axis), both models lie in the range of the data, but with respect to RTs (y-axis), both models predict mean RTs that are clearly too high (except for some values of the M1*evidence model). This is to be expected, as the models were estimated only on the basis of the choices, without using RTs. Taken together, the new models predict choices and how many pieces of information are acquired, but fitting the number of sampled ratings alone leaves open whether the assumption of a decision not to decide provides any advantage in describing the cognitive process of sequential value-based decisions.
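As a side note on the comparison metric, the BIC penalizes a model's maximized log-likelihood by its number of free parameters, so that a better raw fit only wins if it is not bought with extra complexity. The sketch below shows the computation; all numbers (log-likelihoods, parameter counts, number of observations) are hypothetical placeholders and are not taken from Table 1 or the paper.

```python
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L); lower is better."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

# Purely hypothetical numbers (NOT values from the paper), only to show
# how fit and complexity trade off in the comparison described above.
n_obs = 240                        # e.g., observed (choice, rating) pairs
candidates = {
    "M0":          {"loglik": -310.0, "k": 3},
    "M1standard":  {"loglik": -318.0, "k": 3},
    "M1*evidence": {"loglik": -317.0, "k": 4},
}
for name, model in candidates.items():
    print(f"{name:12s} BIC = {bic(model['loglik'], model['k'], n_obs):.1f}")
```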

