SOAPdenovo2: an empirically improved memory-efficient short-read de novo assembler

R Luo, B Liu, Y Xie, Z Li, W Huang, J Yuan, G He, Y Chen, Q Pan, Y Liu, J Tang, G Wu… - GigaScience, 2012 - academic.oup.com
Background
De novo genome assembly from next-generation sequencing (NGS) short reads is being performed at a rapidly increasing rate; however, several major challenges must still be overcome for assembly to be efficient and accurate. SOAPdenovo has been successfully applied to assemble many published genomes, but it still needs improvement in continuity, accuracy and coverage, especially in repeat regions.
Findings
To overcome these challenges, we developed its successor, SOAPdenovo2, which benefits from a new algorithm design that reduces memory consumption during graph construction, resolves more repeat regions in contig assembly, increases coverage and length in scaffold construction, improves gap closing, and is optimized for large genomes.
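The graph construction referred to here is the de Bruijn graph approach standard in short-read assemblers: each k-mer in a read contributes an edge between its (k-1)-mer prefix and suffix. A minimal sketch of that idea (not SOAPdenovo2's actual memory-efficient implementation, whose sparse data structures are the paper's contribution):

```python
from collections import defaultdict

def de_bruijn_edges(reads, k):
    """Build edge counts for a de Bruijn graph: each k-mer adds one
    edge from its (k-1)-mer prefix to its (k-1)-mer suffix."""
    edges = defaultdict(int)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            edges[(kmer[:-1], kmer[1:])] += 1
    return dict(edges)

# Two overlapping toy reads; k-mers are ACG, CGT, GTG and CGT, GTG, TGA,
# so the shared edges (CG->GT, GT->TG) accumulate a count of 2.
edges = de_bruijn_edges(["ACGTG", "CGTGA"], k=3)
```

In a real assembler the edge multiplicities feed downstream steps such as error-edge removal and repeat resolution; storing them compactly is exactly where memory consumption becomes the bottleneck for large genomes.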
Conclusions
Benchmarking on the Assemblathon 1 and GAGE datasets showed that SOAPdenovo2 greatly surpasses its predecessor SOAPdenovo and is competitive with other assemblers in both assembly length and accuracy. We also provide an updated assembly of the 2008 Asian (YH) genome using SOAPdenovo2. Here, the contig and scaffold N50 of the YH genome were ∼20.9 kbp and ∼22 Mbp, respectively, 3-fold and 50-fold longer than in the first published version. Genome coverage increased from 81.16% to 93.91%, and peak memory consumption was ∼2/3 lower.
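The N50 statistic quoted above follows the standard definition: the length L such that contigs (or scaffolds) of length ≥ L together cover at least half of the total assembly length. A short illustrative computation (the function name is ours, not from the paper):

```python
def n50(lengths):
    """N50: the length L at which contigs of length >= L, taken in
    decreasing size order, first cover half the total assembly length."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Toy assembly with contigs of 10, 8, 5, 3 and 2 kbp (total 28 kbp):
# 10 kbp alone covers 10/28; adding the 8 kbp contig reaches 18/28 >= half.
print(n50([10, 8, 5, 3, 2]))  # → 8
```

Because N50 rewards a few long sequences over many short ones, a 50-fold jump in scaffold N50, as reported for the YH genome, indicates that far more of the assembly sits in long, contiguous scaffolds.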
Oxford University Press