2018
DOI: 10.1016/j.ejor.2016.06.043
Graph algorithms for DNA sequencing – origins, current models and the future

Cited by 18 publications (5 citation statements)
References 52 publications
“…Finally, they highlight the key developments in sequencing and provide predictions about how these may affect computational models in the future [20].…”
Section: Literature Review
confidence: 99%
“…The main features that distinguish NGS from Sanger sequencing are high parallelism, micro scale, speed, shorter read lengths and low cost [10]. Their technique substantially reduces the cost of producing short DNA reads of 50 to 700 bp, and has opened up the possibility of affordable sequencing of whole genomes [11].…”
Section: Introduction
confidence: 99%
“…The overlap graph model is a straightforward conceptualization of the real-world process and works well as long as the sequencing data are not too large. The necessity of representing whole sequences in the graph and calculating inexact sequence alignments for most pairs of sequences makes the computational process highly time- and memory-consuming ( Blazewicz et al , 2018 ). The literature reports cases where assemblers from this group did not finish computations for larger datasets because of excessive memory requirements ( Gonnella and Kurtz, 2012 ; Kajitani et al , 2014 ).…”
Section: Introduction
confidence: 99%
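The overlap graph described above can be sketched in a few lines. This is a minimal illustration, not the assemblers' actual implementation: it uses only exact suffix-prefix overlaps above a hypothetical minimum length (real assemblers compute inexact alignments, which is exactly the cost the excerpt points to), and the reads, function names and `min_len` parameter are assumptions for the example.

```python
from itertools import permutations

def suffix_prefix_overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` matching a prefix of `b`
    (at least `min_len` characters), or 0 if there is none."""
    start = 0
    while True:
        start = a.find(b[:min_len], start)  # candidate anchor for an overlap
        if start == -1:
            return 0
        if b.startswith(a[start:]):          # rest of a's suffix matches b's prefix
            return len(a) - start
        start += 1

def overlap_graph(reads, min_len=3):
    """Directed edges (a, b) weighted by exact suffix-prefix overlap length.
    Note that every read is kept whole and all ordered pairs are examined --
    the quadratic pairwise work the cited passage describes."""
    return {(a, b): olen
            for a, b in permutations(reads, 2)
            if (olen := suffix_prefix_overlap(a, b, min_len))}

reads = ["TTACGT", "ACGTAC", "GTACGA"]
print(overlap_graph(reads))
# → {('TTACGT', 'ACGTAC'): 4, ('ACGTAC', 'GTACGA'): 4}
```

Even in this toy form the cost structure is visible: the graph stores full reads as nodes and inspects every ordered pair, which is what limits the model on large datasets.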
“…A gain in efficiency of computations is achieved by a much lower volume of stored information and a smaller traversed graph, but mainly by discarding inexact matches. On the other hand, the quality of the resulting contigs decreases somewhat due to the decomposition of sequences into k -mers, as the information about whole reads is partially lost ( Blazewicz et al , 2018 ).…”
Section: Introduction
confidence: 99%
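The k-mer decomposition the excerpt contrasts with the overlap graph is the de Bruijn graph construction. The sketch below is an assumption-laden illustration (toy reads, hypothetical function name, k chosen arbitrarily), not any particular assembler's code; it shows why matches become exact by construction and why whole-read information is lost once reads are cut into fixed-length pieces.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k=4):
    """Nodes are (k-1)-mers; each k-mer in a read contributes one edge
    from its (k-1)-mer prefix to its (k-1)-mer suffix.
    Only fixed-length substrings are stored (not whole reads), and edges
    arise from identical k-mers, so no inexact alignment is ever computed."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return dict(graph)

reads = ["TTACGT", "ACGTAC"]
print(de_bruijn_graph(reads, k=4))
# → {'TTA': ['TAC'], 'TAC': ['ACG'], 'ACG': ['CGT', 'CGT'],
#    'CGT': ['GTA'], 'GTA': ['TAC']}
```

Because two different reads can contribute the same edge (here both reads yield `ACG → CGT`), the graph no longer records which read a path came from, which is the partial loss of whole-read information the passage mentions.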