Making sense of discrepancies in working memory training experiments: a Monte Carlo simulation.

Moreau D - Front Syst Neurosci (2014)

View Article: PubMed Central - PubMed

Affiliation: Department of Psychology, Princeton University, Princeton, NJ, USA.

AUTOMATICALLY GENERATED EXCERPT

The idea that working memory training can enhance general cognitive abilities has received a lot of attention in the last few years... Some studies have demonstrated far transfer to other cognitive abilities after training, whereas others have failed to replicate these findings (see Melby-Lervåg and Hulme for a meta-analytic review)... However, since early studies showing improvements in tasks tapping Gf after working memory training (Jaeggi et al.; Jaušovec and Jaušovec), others have consistently failed to replicate these findings (Chooi and Thompson; Harrison et al.; Redick et al.)... These contradictory results created a dichotomy between labs interested in the same line of research but reaching different conclusions... This is a healthy departure from dichotomized claims about the effectiveness of working memory training, illustrating the importance of more nuanced statements—the same training does not work for everyone, and it is critical to determine what components are required for successful transfer and what components need to be adapted to individual needs... Until we can successfully identify these parameters, training programs will yield differential effects that are difficult to predict... This is often more practical when experimenters have to deal with time constraints, but it can potentially introduce additional confounds... Because the present simulation does not constrain sampling hierarchically, its results will not be influenced by the sampling strategy one might favor... This percentage might seem trivial, but when the population includes three subpopulations of learners (high, medium, low), a simulation with 10,000 draws yields unbalanced samples 5.18% of the time (Figure 1B)... This additional step would ensure balanced samples across experimental conditions at the onset of the study to reduce the potential biases emphasized in this paper... In addition, because they would be measured before any experimental treatment, initial growth profiles could be used as covariates in the final analyses to refine the interpretation of significant effects... For example, increasing the sample in the Monte Carlo simulation to 40 subjects per cell allows detecting unbalanced samples in 3.47% of cases in the two-subpopulation scenario (11.05% with unequal subpopulations), and in 5.20% of cases in the three-subpopulation scenario (69.47% with unequal subpopulations)... Moreover, the limitation presented herein, as well as its potential remedies, applies equally to other types of training designs not based on working memory—in fact, I do hope that the paper contributes to an already ongoing shift of focus from general training contents to more individualized programs, taking into account individual differences in cognition... Following this idea serves a dual purpose—it allows designing more effective training programs with applications to clinical and non-clinical populations, particularly important in our aging societies, but also provides suitable environments to test empirical claims and refine current models of cognition.
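To make the covariate remedy mentioned above more concrete, here is a minimal, hypothetical sketch with synthetic data. The article does not provide analysis code, so this is only one way such an adjustment could look; the variable names (group, pretest_slope, posttest_gain) and the use of statsmodels are illustrative assumptions, not the author's method.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 40  # e.g., 20 participants per experimental condition

df = pd.DataFrame({
    "group": np.repeat(["training", "control"], n // 2),
    "pretest_slope": rng.normal(0, 1, n),  # pre-treatment growth profile (hypothetical measure)
})
# Synthetic outcome: partly driven by the pre-existing growth profile,
# with no true treatment effect built in.
df["posttest_gain"] = 0.5 * df["pretest_slope"] + rng.normal(0, 1, n)

# ANCOVA-style model: the group effect is estimated after adjusting for
# individual differences in the initial growth profile.
model = smf.ols("posttest_gain ~ group + pretest_slope", data=df).fit()
print(model.summary())
```

In a real study, the pre-treatment profile would of course be measured rather than simulated, and could also be used to stratify random assignment before training begins.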



Figure 1: Distributions of χ² contingency table test p-values for a Monte Carlo simulation with 10,000 draws of two samples (N = 20 per cell) in four different scenarios: (A) 2 subpopulations—Equal ratios, (B) 3 subpopulations—Equal ratios, (C) 2 subpopulations—Unequal ratios (60–40%), and (D) 3 subpopulations—Unequal ratios (55–35–10%). Histograms represent distribution frequencies; the orange line depicts density estimates. The blue line represents the threshold for p = 0.05 (all p-values to the left of the line are significant, indicating unbalanced samples).

Mentions: Recent evidence, however, suggests different learning curves, or rates of improvement, between individuals, based on distinct neural changes (e.g., Kundu et al., 2013). This is a completely different scenario. Let us assume the general population we are sampling from includes two subpopulations with different rates of improvement (high and low). In this case, a Monte Carlo simulation with 10,000 draws shows that random sampling from the population will yield unbalanced samples 1.74% of the time (Figure 1A). This percentage might seem trivial, but when the population includes three subpopulations of learners (high, medium, low), a simulation with 10,000 draws yields unbalanced samples 5.18% of the time (Figure 1B). The probability rises quickly when more subpopulations are included in the model: the more individuals differ in their ability to learn, the more likely a training experiment is to be affected by sampling error. In fact, ecological populations are likely to be even more heterogeneous, thereby exacerbating this effect. In training experiments, individual differences matter.
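The simulation described above is straightforward to reproduce in outline. The sketch below is a hypothetical re-implementation in Python, not the author's original code. It assumes each draw consists of two samples of N = 20 whose subpopulation labels follow fixed mixing ratios, and that "unbalanced" means a significant χ² contingency table test at p < 0.05; exact percentages will depend on details such as whether a continuity correction is applied, so they need not match the published values.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2014)  # fixed seed for reproducibility

def proportion_unbalanced(ratios, n_per_cell=20, n_draws=10_000, alpha=0.05):
    """Proportion of draws in which two random samples differ significantly
    in subpopulation composition (chi-squared contingency table test)."""
    unbalanced = 0
    for _ in range(n_draws):
        # Each row is one sample: counts of participants per subpopulation.
        table = np.vstack([rng.multinomial(n_per_cell, ratios),
                           rng.multinomial(n_per_cell, ratios)])
        # Skip degenerate tables in which a subpopulation appears in neither
        # sample (the test's expected frequencies would contain zeros).
        if (table.sum(axis=0) == 0).any():
            continue
        _, p, _, _ = chi2_contingency(table)
        if p < alpha:
            unbalanced += 1
    return unbalanced / n_draws

# Four scenarios mirroring the panels of Figure 1.
print(proportion_unbalanced([0.5, 0.5]))          # A: 2 subpopulations, equal ratios
print(proportion_unbalanced([1/3, 1/3, 1/3]))     # B: 3 subpopulations, equal ratios
print(proportion_unbalanced([0.6, 0.4]))          # C: 2 subpopulations, 60-40%
print(proportion_unbalanced([0.55, 0.35, 0.10]))  # D: 3 subpopulations, 55-35-10%
```

Setting n_per_cell=40 corresponds to the larger-sample case discussed in the excerpt above, although the exact percentages reported there depend on the specific assumptions of the original simulation.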

