Increasing calling accuracy, coverage, and read depth in sequence data by the use of haplotype blocks
- Publication Year :
- 2021
- Publisher :
- Cold Spring Harbor Laboratory, 2021.
Abstract
- High-throughput genotyping of large numbers of lines remains a key challenge in plant genetics, requiring geneticists and breeders to balance data quality against the number of genotyped lines across a variety of existing technologies when resources are limited. In this work, we propose a new imputation pipeline ("HBimpute") that generates high-quality genomic data from low read-depth whole-genome sequence data. The key idea of the pipeline is to use haplotype blocks from the software HaploBlocker to identify locally similar lines and merge their reads locally. The effectiveness of the pipeline is showcased on a dataset of 321 doubled haploid lines of a European maize landrace, sequenced at 0.5X read depth. Overall imputation error rates are cut in half compared to the state-of-the-art software BEAGLE, while the average read depth is increased to 83X, enabling the calling of structural variation. The usefulness of the imputed data panel is further evaluated by comparing its performance in common breeding applications to that of genomic data from a 600k array. In particular, for genome-wide association studies, the sequence data performs slightly better. Furthermore, genomic prediction based on the markers shared between the array and sequence data yields a slightly higher predictive ability for the imputed sequence data, indicating that the data quality obtained from low read-depth sequencing is on par with, or even slightly higher than, that of high-density array data. When all sequence markers are included, predictive ability is slightly reduced, indicating lower overall data quality at non-array markers.
Author summary
High-throughput genotyping of large numbers of lines remains a key challenge in plant genetics and breeding. Cost, precision, and throughput must be balanced to achieve optimal efficiency given available technologies and finite resources.
Although genotyping arrays are still considered the gold standard in high-throughput quantitative genetics, recent advances in sequencing provide new opportunities. Both the quality and the cost of sequence-based genomic data depend heavily on the read depth used. In this work, we propose a new imputation pipeline ("HBimpute") that uses haplotype blocks to detect individuals of the same genetic origin and subsequently uses all reads of those individuals in the variant calling. The resulting virtual read depth is thus artificially increased, leading to higher calling accuracy, higher coverage, and the ability to call copy number variation from relatively cheap low read-depth sequencing data. This makes sequencing a cost-competitive alternative to genotyping arrays, with the additional benefit of potential access to structural variation.
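The core idea described above can be sketched as follows. This is a minimal illustration, not the authors' HBimpute implementation: the data structures (`block_of_line`, per-position read counts) and the simple majority call are hypothetical simplifications, but they show how pooling reads across lines that share a haplotype block raises the effective read depth per call.

```python
from collections import defaultdict

def pool_and_call(block_of_line, reads, min_depth=5):
    """Pool reads across lines sharing a haplotype block, then call variants.

    block_of_line: {line_id: block_id} for one genomic window
                   (e.g. as derived from HaploBlocker output).
    reads: {line_id: {pos: (ref_count, alt_count)}} raw low-depth counts.
    Returns {line_id: {pos: 0 or 1}} with calls based on pooled depth.
    """
    # Accumulate read counts per haplotype block and position.
    pooled = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for line, block in block_of_line.items():
        for pos, (ref, alt) in reads.get(line, {}).items():
            pooled[block][pos][0] += ref
            pooled[block][pos][1] += alt

    # Call one genotype per block, then assign it to every member line,
    # so each line inherits a call backed by the pooled ("virtual") depth.
    calls = {}
    for line, block in block_of_line.items():
        calls[line] = {}
        for pos, (ref, alt) in pooled[block].items():
            if ref + alt < min_depth:
                continue  # still too little evidence; leave missing
            # Haploid majority call (doubled haploid lines are homozygous).
            calls[line][pos] = 1 if alt > ref else 0
    return calls
```

For example, two 0.5X lines in the same block each contribute only a handful of reads at a site, but their pooled counts can clear the depth threshold that neither would reach alone, while a line in a singleton block stays uncalled.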
- Subjects :
- Zero hunger
Computer science
Pipeline (computing)
Quantitative genetics
Structural variation
Data quality
Imputation (statistics)
Copy-number variation
Data mining
Genotyping
Imputation (genetics)
Genetic association
Details
- Database :
- OpenAIRE
- Accession number :
- edsair.doi...........a726e26eda37cda18246c83b254c3ef6