
Hadoop and PySpark for reproducibility and scalability of genomic sequencing studies.

Authors :
Wheeler NR
Benchek P
Kunkle BW
Hamilton-Nelson KL
Warfe M
Fondran JR
Haines JL
Bush WS
Source :
Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing [Pac Symp Biocomput] 2020; Vol. 25, pp. 523-534.
Publication Year :
2020

Abstract

Modern genomic studies are rapidly growing in scale, and the analytical approaches used to analyze genomic data are increasing in complexity. Genomic data management poses logistical and computational challenges, and analyses increasingly rely on genomic annotation resources that create their own data management and versioning issues. As a result, genomic datasets are increasingly handled in ways that limit the rigor and reproducibility of many analyses. In this work, we examine the use of the Spark infrastructure for the management, access, and analysis of genomic data, in comparison to traditional genomic workflows on typical cluster environments. We validate the framework by reproducing previously published results from the Alzheimer's Disease Sequencing Project. Using this framework with analyses designed in Jupyter notebooks, Spark provides improved workflows, reduces user-driven data partitioning, and enhances the portability and reproducibility of the distributed analyses required for large-scale genomic studies.
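
As a rough illustration of the style of workflow the abstract describes (not code from the paper), a PySpark session in a Jupyter notebook might read variant data from a columnar store and run a distributed aggregation without manual chunking of input files. The file path and column names below are hypothetical.

# Minimal, hypothetical PySpark sketch of a notebook-driven genomic analysis.
# Paths and column names are illustrative only, not taken from the study.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("genomic-variant-summary").getOrCreate()

# Assume variant-level genotype data has already been exported to Parquet,
# one row per variant-sample pair (hypothetical schema).
variants = spark.read.parquet("hdfs:///data/example/variants.parquet")

# Count carriers of rare alternate alleles per gene; Spark distributes the
# filtering, shuffle, and aggregation, so no user-driven data partitioning
# is needed.
rare_carriers = (
    variants
    .filter((F.col("allele_frequency") < 0.01) & (F.col("genotype") != "0/0"))
    .groupBy("gene_symbol")
    .agg(F.countDistinct("sample_id").alias("n_carriers"))
    .orderBy(F.desc("n_carriers"))
)

rare_carriers.show(20)

Because the notebook captures both the code and its outputs, a sketch like this can be rerun end to end on another cluster, which is the kind of portability and reproducibility the abstract emphasizes.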

Details

Language :
English
ISSN :
2335-6936
Volume :
25
Database :
MEDLINE
Journal :
Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing
Publication Type :
Academic Journal
Accession number :
31797624