Data from: Prevention, diagnosis, and treatment of high-throughput sequencing data pathologies
Cite this dataset
Zhou, Xiaofan; Rokas, Antonis (2014). Data from: Prevention, diagnosis, and treatment of high-throughput sequencing data pathologies [Dataset]. Dryad. https://doi.org/10.5061/dryad.h988s
High-throughput sequencing (HTS) technologies generate millions of sequence reads from DNA/RNA molecules rapidly and cost-effectively, enabling single-investigator laboratories to address a variety of "omics" questions in non-model organisms and fundamentally changing the way genomic approaches are used to advance biological research. One major challenge posed by HTS is the complexity and difficulty of data quality control (QC). While QC issues associated with sample isolation, library preparation, and sequencing are well known and protocols for their handling are widely available, the QC of the actual sequence reads generated by HTS is often overlooked. HTS-generated sequence reads can contain various errors, biases, and artifacts whose identification and amelioration can greatly impact subsequent data analysis. However, a systematic survey of QC procedures for HTS data is still lacking. In this review, we begin by presenting standard "health check-up" QC procedures recommended for HTS datasets and establishing what "healthy" HTS data look like. We next classify the errors, biases, and artifacts present in HTS data into three major types of "pathologies", discussing their causes and symptoms, and illustrating with examples their diagnosis and impact on downstream analyses. We conclude this review by offering examples of successful "treatment" protocols and recommendations on standard practices and treatment options. Notwithstanding the speed with which HTS technologies, and consequently their pathologies, change, we argue that careful QC of HTS data is an important, yet often neglected, aspect of their application in molecular ecology, and we lay the groundwork for developing an HTS data QC "best practices" guide.
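As a minimal illustration of the kind of "health check-up" QC the abstract describes, the sketch below computes the mean Phred quality of each read in a FASTQ file and filters on a simple threshold. This is a hypothetical example, not a procedure from the dataset itself; it assumes Phred+33 quality encoding, and real QC tools (e.g., FastQC) report many more metrics.

```python
# Hypothetical per-read QC sketch: mean Phred quality from FASTQ records.
# Assumes Phred+33 encoding; a real pipeline would use a dedicated QC tool.

def mean_phred_quality(quality_line, offset=33):
    """Average Phred score of one FASTQ quality string."""
    return sum(ord(c) - offset for c in quality_line) / len(quality_line)

def reads_passing_qc(fastq_text, min_mean_q=20):
    """Yield read IDs whose mean quality meets a simple threshold."""
    lines = fastq_text.strip().splitlines()
    for i in range(0, len(lines), 4):  # FASTQ records are 4 lines each
        read_id, _seq, _plus, qual = lines[i:i + 4]
        if mean_phred_quality(qual) >= min_mean_q:
            yield read_id.lstrip("@")

example = """@read1
ACGTACGT
+
IIIIIIII
@read2
ACGTACGT
+
########"""

# 'I' encodes Q40 (passes); '#' encodes Q2 (fails)
print(list(reads_passing_qc(example)))  # -> ['read1']
```

A "healthy" dataset in this toy sense is one where most reads clear the quality threshold; in practice one would also inspect per-position quality profiles, adapter content, and duplication levels.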