NetBenchmark: a bioconductor package for reproducible benchmarks of gene regulatory network inference.

Bellot P, Olsen C, Salembier P, Oliveras-Vergés A, Meyer PE - BMC Bioinformatics (2015)

Bottom Line: Our open-source and freely available Bioconductor package aggregates a large set of tools to assess the robustness of network inference algorithms against different simulators, topologies, sample sizes and noise intensities. The benchmarking framework that uses various datasets highlights the specialization of some methods toward network types and data. As a result, it is possible to identify the techniques that have broad overall performances.


Affiliation: Universitat Politecnica de Catalunya BarcelonaTECH, Department of Signal Theory and Communications, UPC-Campus Nord, C/ Jordi Girona, 1-3, Barcelona, 08034, Spain. pau.bellot@upc.edu.

ABSTRACT

Background: In the last decade, a great number of methods for reconstructing gene regulatory networks from expression data have been proposed. However, very few tools and datasets allow those methods to be evaluated accurately and reproducibly. Hence, we propose here a new tool able to perform a systematic, yet fully reproducible, evaluation of transcriptional network inference methods.

Results: Our open-source and freely available Bioconductor package aggregates a large set of tools to assess the robustness of network inference algorithms against different simulators, topologies, sample sizes and noise intensities.

Conclusions: The benchmarking framework that uses various datasets highlights the specialization of some methods toward network types and data. As a result, it is possible to identify the techniques that have broad overall performances.



Fig. 3: Plots of performance with different noise intensities. Each line represents a method (color coded); the mean performance over the ten runs is shown.

Mentions: Here we present a procedure to test the stability of the different algorithms in the presence of local Gaussian noise. To do so, we use all datasources in Table 2 and gradually increase the local noise intensity (increasing the κ value of σn;κ%), thereby decreasing the SNR. In this study we also use subsampled datasources of 150 experiments in order to assess the effect of noise on the various GRN reconstruction methods and to compare the results with those obtained in the previous study. In Table 6 we present the mean AUPR values of an undirected evaluation on the top 20 % of the total possible connections for each dataset. For each σn;κ% value, we perform ten trials, and the performance metric (AUPR20 %) is averaged over those trials. Fig. 3 presents the results for the datasources with around 1000 genes.
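The noise-injection and scoring procedure described above can be sketched as follows. This is an illustrative Python sketch of the two ingredients (local Gaussian noise scaled per gene, and AUPR restricted to the top-ranked fraction of possible edges), not the package's actual R API; the function names and toy arrays are our own:

```python
import numpy as np

def add_local_noise(data, kappa, rng):
    """Add local Gaussian noise: each gene's noise sd is kappa % of that
    gene's own sd, so raising kappa lowers the signal-to-noise ratio."""
    sd = data.std(axis=0) * (kappa / 100.0)
    return data + rng.normal(0.0, sd, size=data.shape)

def aupr_top(scores, truth, top=0.20):
    """AUPR of an undirected evaluation restricted to the top `top`
    fraction of all possible edges, ranked by inferred score."""
    keep = int(np.ceil(top * len(scores)))
    order = np.argsort(scores)[::-1][:keep]          # best-scored edges first
    hits = truth[order].astype(float)
    tp = np.cumsum(hits)
    precision = tp / np.arange(1, keep + 1)
    recall = tp / max(truth.sum(), 1)
    # anchor the PR curve at recall 0, then integrate by trapezoids
    precision = np.concatenate(([precision[0]], precision))
    recall = np.concatenate(([0.0], recall))
    return float(np.sum(np.diff(recall) * (precision[1:] + precision[:-1]) / 2.0))
```

Averaging `aupr_top` over ten noisy replicates per κ value mirrors the averaging scheme used for the AUPR20 % figures above.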
