Predictive modelling using neuroimaging data in the presence of confounds

View Article: PubMed Central - PubMed

ABSTRACT

When training predictive models from neuroimaging data, we typically have available non-imaging variables such as age and gender that affect the imaging data but which we may be uninterested in from a clinical perspective. Such variables are commonly referred to as ‘confounds’. In this work, we firstly give a working definition for confound in the context of training predictive models from samples of neuroimaging data. We define a confound as a variable which affects the imaging data and has an association with the target variable in the sample that differs from that in the population-of-interest, i.e., the population over which we intend to apply the estimated predictive model. The focus of this paper is the scenario in which the confound and target variable are independent in the population-of-interest, but the training sample is biased due to a sample association between the target and confound. We then discuss standard approaches for dealing with confounds in predictive modelling such as image adjustment and including the confound as a predictor, before deriving and motivating an Instance Weighting scheme that attempts to account for confounds by focusing model training so that it is optimal for the population-of-interest. We evaluate the standard approaches and Instance Weighting in two regression problems with neuroimaging data in which we train models in the presence of confounding, and predict samples that are representative of the population-of-interest. For comparison, these models are also evaluated when there is no confounding present. In the first experiment we predict the MMSE score using structural MRI from the ADNI database with gender as the confound, while in the second we predict age using structural MRI from the IXI database with acquisition site as the confound. 
Considered over both datasets, we find that none of the methods for dealing with confounding gives more accurate predictions than a baseline model that ignores confounding; indeed, including the confound as a predictor gives models that are less accurate than the baseline. We do find, however, that different methods appear to focus their predictions on specific subsets of the population-of-interest, and that predictive accuracy is greater when no confounding is present. We conclude with a discussion comparing the advantages and disadvantages of each approach, and the implications of our evaluation for building predictive models that can be used in clinical practice.
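To make the Instance Weighting idea concrete, the sketch below reweights a deliberately biased toy training sample so that it behaves like a population in which the confound and target are independent. This is an illustrative toy setup, not the paper's data or exact scheme: the binary confound c, the target binning into quartiles, and the assumed population frequency P(c)=1/2 are all hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy biased sample (hypothetical, not the paper's data): the target y is
# correlated with a binary confound c in the training sample, although the
# two are assumed independent in the population-of-interest.
n = 1000
c = rng.integers(0, 2, n)                  # confound (e.g. gender or site)
y = rng.normal(0, 1, n) + 1.5 * c          # sample association between y and c
x = y + 0.8 * c + rng.normal(0, 0.5, n)    # "imaging" feature affected by both

# Instance weights: population P(c) divided by the sample P(c | y-bin), so the
# reweighted training distribution matches a population where c and y are
# independent. Quartile bins of y are an arbitrary illustrative choice.
bins = np.digitize(y, np.quantile(y, [0.25, 0.5, 0.75]))
w = np.empty(n)
for b in np.unique(bins):
    for k in (0, 1):
        m = (bins == b) & (c == k)
        p_sample = m.sum() / max((bins == b).sum(), 1)
        w[m] = 0.5 / max(p_sample, 1e-12)  # population P(c=k) assumed to be 0.5

# Weighted least squares: minimise sum_i w_i * (y_i - a*x_i - b)^2.
X = np.column_stack([x, np.ones(n)])
W = np.diag(w)
coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(coef)
```

In practice the same weights could be passed as `sample_weight` to any estimator that supports weighted fitting; the weighted least-squares solve above simply keeps the sketch self-contained.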

No MeSH data available.


f0060 (Fig. A.12): Number of subjects by gender over unit intervals of f in the unbiased training sample are shown in (a). The corresponding bar chart for the biased training sample is shown in (b).
© Copyright Policy - CC BY

Mentions: In the first experiment, we test whether bias in the training sample will affect the learning of the predictive model when the model is correctly specified. Firstly, we create an unbiased training sample consisting of 900 observations sampled 'randomly' from the non-test data according to

(A.8)  F_Tr1(f ∈ [j, j+1]) = 1/9,   F_Tr1(c | f ∈ [j, j+1]) = 1/2,

so that the training sample is representative of the population, with minimal correlation between f and c (Fig. A.12).
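The sampling scheme of Eq. (A.8) can be sketched as a stratified draw: each unit interval of f contributes equally, and the confound is balanced within each interval. The pool below is a hypothetical stand-in for the non-test data (its size and the [0, 9) range of f are assumptions for the example), and the per-cell count of 50 follows from 900 observations split over 9 intervals and 2 confound levels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pool standing in for the non-test data: a continuous variable f
# on [0, 9) and a binary confound c (e.g. gender).
pool_f = rng.uniform(0, 9, 20000)
pool_c = rng.integers(0, 2, 20000)

# Draw 900 observations so each unit interval [j, j+1) contributes 100 subjects
# (P(f in [j, j+1)) = 1/9), split 50/50 over c (P(c | f) = 1/2), mirroring the
# stratified scheme of Eq. (A.8).
idx = []
for j in range(9):
    for k in (0, 1):
        cell = np.flatnonzero((pool_f >= j) & (pool_f < j + 1) & (pool_c == k))
        idx.extend(rng.choice(cell, size=50, replace=False))
idx = np.array(idx)
print(len(idx))  # 900
```

By construction the resulting sample has no association between f and c, which is what makes it a useful unbiased reference against the biased sample used later.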


