Validation and selection of ODE based systems biology models: how to arrive at more reliable decisions.

Hasdemir D, Hoefsloot HC, Smilde AK - BMC Syst Biol (2015)

Bottom Line: However, drawbacks associated with this approach are usually underestimated. The hold-out validation strategy leads to biased conclusions, since it can lead to different validation and selection decisions when different partitioning schemes are used. Therefore, it proves to be a promising alternative to the standard hold-out validation strategy.


Affiliation: Biosystems Data Analysis Group, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands. D.Hasdemir@uva.nl.

ABSTRACT

Background: Most ordinary differential equation (ODE) based modeling studies in systems biology involve a hold-out validation step for model validation. In this framework a pre-determined part of the data is used as validation data and is therefore not used for estimating the parameters of the model. The model is assumed to be validated if the model predictions on the validation dataset show good agreement with the data. Model selection between alternative model structures can also be performed in the same setting, based on the predictive power of the model structures on the validation dataset. However, drawbacks associated with this approach are usually underestimated.

Results: We have carried out simulations using a recently published High Osmolarity Glycerol (HOG) pathway model from S. cerevisiae to demonstrate these drawbacks. We have shown that it matters greatly how the data is partitioned and which part of the data is used for validation purposes. The hold-out validation strategy leads to biased conclusions, since it can lead to different validation and selection decisions when different partitioning schemes are used. Furthermore, finding sensible partitioning schemes that would lead to reliable decisions is heavily dependent on the biology and the unknown model parameters, which turns the problem into a paradox. This brings the need for alternative validation approaches that offer flexible partitioning of the data. For this purpose, we have introduced a stratified random cross-validation (SRCV) approach that successfully overcomes these limitations.
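The core idea of stratified random cross-validation, as described above, is to build validation folds that each draw data points from every experimental condition, instead of holding out one whole condition (e.g. one cell type) at a time. The following is a minimal illustrative sketch of that idea, not the authors' implementation; the fold count, stratum labels, and function name are assumptions for the example.

```python
import random
from collections import defaultdict

def stratified_random_folds(points, strata, k, seed=0):
    """Assign data points to k cross-validation folds so that each fold
    draws points from every stratum (e.g. each cell-type/dose condition),
    rather than holding out an entire stratum at once as in hold-out
    validation. Illustrative sketch only."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for p, s in zip(points, strata):
        by_stratum[s].append(p)
    folds = [[] for _ in range(k)]
    for members in by_stratum.values():
        rng.shuffle(members)          # random assignment within a stratum
        for i, p in enumerate(members):
            folds[i % k].append(p)    # spread the stratum across all folds
    return folds

# Example: 12 measurements, 4 from each of three hypothetical conditions
points = list(range(12))
strata = ["Sln1", "Sho1", "WT"] * 4
folds = stratified_random_folds(points, strata, k=4)
```

With four points per stratum and four folds, every fold receives exactly one point from each condition, so no model is ever validated exclusively on a condition it has never seen during estimation.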

Conclusions: SRCV leads to more stable decisions for both validation and selection that are not biased by the underlying biological phenomena. Furthermore, it is less dependent on the specific noise realization in the data. Therefore, it proves to be a promising alternative to the standard hold-out validation strategy.



Fig. 9: Percentage prediction errors (PE) of the true model structure in scenario 1. Each box plot shows the distribution of PE over 100 different realizations of the data. The red dots mark outliers lying outside approximately 99.3 % coverage under the assumption that the data are normally distributed; they indicate realizations with relatively high PE. Blue, green and black boxes refer to the Sln1, Sho1 and WT schemes, respectively. Each row in the figure corresponds to a single scheme. The labels on the x-axis show the specific dose and cell type of the data on which validation was performed, and also give the medians of the PE distributions summarized by the box plots. In each graph, the ten realizations with the highest PE lie above the black dashed line; the region above this line is compressed for visual ease. (a) PE obtained on Sho1 validation subsets in the Sln1 scheme. (b) PE on WT validation subsets in the Sln1 scheme. (c) PE on Sln1 validation subsets in the Sho1 scheme. (d) PE on WT validation subsets in the Sho1 scheme. (e) PE on Sln1 validation subsets in the WT scheme. (f) PE on Sho1 validation subsets in the WT scheme.

Mentions: When only data from the Sln1-branch-active deletion mutant (Sln1 data) are used for parameter estimation, validation on data from the Sho1-branch-active deletion mutant (Sho1 data) can be very misleading, because models trained on Sln1 data yield poor predictions on the Sho1 data. The same models, however, can achieve reasonable predictions on the WT data (see Fig. 8 for an example). This can be seen from the distribution of the percentage prediction errors, represented by box plots for each validation set in Fig. 9a and b.
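The percentage prediction errors summarized in the box plots quantify the discrepancy between model predictions and the held-out validation data, scaled to percent. The paper's exact normalization is not reproduced here; the sketch below uses the Euclidean norm of the residual relative to the norm of the observed data, which is one common convention, and the function name is illustrative.

```python
def percentage_prediction_error(y_obs, y_pred):
    """Relative prediction error in percent: ||y_obs - y_pred|| / ||y_obs|| * 100.
    One common convention; the paper's exact normalization may differ."""
    residual_norm = sum((o - p) ** 2 for o, p in zip(y_obs, y_pred)) ** 0.5
    data_norm = sum(o ** 2 for o in y_obs) ** 0.5
    return 100.0 * residual_norm / data_norm
```

Under this convention, a perfect prediction gives 0 %, and a model that predicts zero everywhere gives 100 %.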

