No evidence for an item limit in change detection.

Keshvari S, van den Berg R, Ma WJ - PLoS Comput. Biol. (2013)

Bottom Line: Recent findings force us to consider the alternative view that working memory is limited by the precision in stimulus encoding, with mean precision decreasing with increasing set size ("continuous-resource models"). Most previous studies that used the change detection paradigm have ignored effects of limited encoding precision by using highly discriminable stimuli and only large changes. In a rigorous comparison of five models, we found no evidence of an item limit.


Affiliation: Department of Neuroscience, Baylor College of Medicine, Houston, Texas, United States of America.

ABSTRACT
Change detection is a classic paradigm that has been used for decades to argue that working memory can hold no more than a fixed number of items ("item-limit models"). Recent findings force us to consider the alternative view that working memory is limited by the precision in stimulus encoding, with mean precision decreasing with increasing set size ("continuous-resource models"). Most previous studies that used the change detection paradigm have ignored effects of limited encoding precision by using highly discriminable stimuli and only large changes. We conducted two change detection experiments (orientation and color) in which change magnitudes were drawn from a wide range, including small changes. In a rigorous comparison of five models, we found no evidence of an item limit. Instead, human change detection performance was best explained by a continuous-resource model in which encoding precision is variable across items and trials even at a given set size. This model accounts for comparison errors in a principled, probabilistic manner. Our findings sharply challenge the theoretical basis for most neural studies of working memory capacity.

pcbi-1002927-g004: Bayesian model comparison. Model log likelihood of each model minus that of the VP model (mean ± s.e.m.). A value of −x means that the data are e^x times more probable under the VP model.

Mentions: The RMS errors reported so far are rather arbitrary descriptive statistics. To compare the models in a more principled (though less visualizable) fashion, we performed Bayesian model comparison, also known as computing Bayes factors [43]–[44] (see Text S1). This method returns the likelihood of each model given the data and has three desirable properties: it uses all data rather than only a subset (as cross-validation would) or a summary statistic; it does not rely solely on point estimates of the parameters but integrates over parameter space, thereby accounting for a model's robustness to variations in its parameters; and it automatically incorporates a correction for the number of free parameters. We found that the log likelihood of the VP model exceeds that of the IP, SA, SR, and EP models by 97±11, 7.2±3.5, 7.4±3.7, and 19±3, respectively (Fig. 4). This constitutes strong evidence in favor of the VP model, for example according to Jeffreys' scale [45]. Based on our data, we can convincingly rule out the three item-limit models (IP, SA, and SR), as well as the equal-precision (EP) model, as descriptions of human change detection behavior.
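The relationship between the reported log-likelihood differences and Bayes factors is simple exponentiation: a log-likelihood advantage of x for the VP model corresponds to the data being e^x times more probable under VP. As a minimal sketch (using only the mean differences quoted above; the dictionary name and structure are illustrative, not from the paper):

```python
import math

# Mean log model likelihood of the VP model minus that of each
# alternative model, as reported in the text (s.e.m. omitted here).
log_lik_diff = {"IP": 97.0, "SA": 7.2, "SR": 7.4, "EP": 19.0}

# A log-likelihood difference of x is a Bayes factor of e^x in favor of VP.
bayes_factors = {model: math.exp(x) for model, x in log_lik_diff.items()}

for model, bf in bayes_factors.items():
    print(f"VP vs {model}: Bayes factor ~ {bf:.3g}")
```

Even the smallest difference (7.2 for the SA model) yields a Bayes factor above a thousand, which is why the paper describes the evidence as strong on Jeffreys' scale.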

