1. NCBench: providing an open, reproducible, transparent, adaptable, and continuous benchmark approach for DNA-sequencing-based variant calling [version 2; peer review: 2 approved]
- Authors
- Felix Wiegand, Bianca Stöcker, Famke Bäuerle, Susanne Motameny, Andreas Buness, Alexander J. Probst, Fabian Brand, Axel Schmidt, Tyll Stöcker, Sugirthan Sivalingam, Andreas Petzold, Marc Sturm, Janine Altmueller, Johannes Köster, Kerstin Becker, Leon Brandhoff, Anna Ossowski, Christian Mertes, Avirup Guha Neogi, Gisela Gabernet, Nicholas H. Smith, and Friederike Hanssen
- Subjects
- continuous, benchmarking, NGS, variant calling, eng, Medicine, Science
- Abstract
We present the results of the human genomic small variant calling benchmarking initiative of the German Research Foundation (DFG) funded Next Generation Sequencing Competence Network (NGS-CN) and the German Human Genome-Phenome Archive (GHGA). In this effort, we developed NCBench, a continuous benchmarking platform for evaluating small genomic variant callsets in terms of recall, precision, and false positive/negative error patterns. NCBench is implemented as a continuously re-evaluated open-source repository. We show that it is possible to rely entirely on free public infrastructure (GitHub, GitHub Actions, Zenodo) in combination with established open-source tools. NCBench is agnostic of the dataset used and can evaluate an arbitrary number of given callsets, reporting the results in a visual and interactive way. We used NCBench to evaluate over 40 callsets generated by various variant calling pipelines available in the participating groups, run on three exome datasets from different enrichment kits and at different coverages. While all pipelines achieve high overall quality, subtle systematic differences between callers and datasets exist and are made apparent by NCBench. These insights are useful for improving existing pipelines and developing new workflows. NCBench is meant to be open for the contribution of any given callset. Most importantly, it spares authors from repeatedly re-implementing paper-specific variant calling benchmarks when publishing new tools or pipelines, while readers benefit from being able to (continuously) observe the performance of tools and pipelines at the time of reading rather than at the time of writing.
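To make the evaluation metrics concrete, the following is a minimal sketch of how recall, precision, and false positive/negative counts can be computed for a callset against a truth set. The variant representation and function name are illustrative assumptions, not NCBench's actual implementation (which operates on standard VCF callsets).

```python
# Hedged sketch of callset scoring against a truth set.
# Variants are modeled as (chrom, pos, ref, alt) tuples; this is an
# illustrative simplification, not NCBench's real data model.

def score_callset(calls, truth):
    """Return (recall, precision, fp, fn) for a callset vs. a truth set."""
    tp = len(calls & truth)   # called and present in the truth set
    fp = len(calls - truth)   # called but absent from the truth set
    fn = len(truth - calls)   # in the truth set but missed
    recall = tp / (tp + fn) if truth else 0.0
    precision = tp / (tp + fp) if calls else 0.0
    return recall, precision, fp, fn

truth = {("chr1", 100, "A", "T"), ("chr1", 200, "C", "G"), ("chr2", 50, "G", "A")}
calls = {("chr1", 100, "A", "T"), ("chr1", 200, "C", "G"), ("chr3", 10, "T", "C")}

recall, precision, fp, fn = score_callset(calls, truth)
print(recall, precision, fp, fn)  # two of three true variants found, one false call
```

Real benchmarks additionally have to handle variant normalization and representation differences (e.g. via tools such as hap.py), which this toy set comparison glosses over.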
- Published
- 2024