2022
DOI: 10.1101/2022.02.21.481353
Preprint
Truvari: Refined Structural Variant Comparison Preserves Allelic Diversity

Abstract: For multi-sample structural variant analyses like merging, benchmarking, and annotation, the fundamental operation is to identify when two SVs are the same. Commonly applied approaches for comparing SVs were developed alongside technologies which produce ill-defined boundaries. As SV detection becomes more exact, algorithms to preserve this refined signal are needed. Here we present Truvari - a SV comparison, annotation and analysis toolkit - and demonstrate the effect of SV comparison choices by building popu…

Cited by 29 publications (36 citation statements)
References 40 publications (48 reference statements)
“…Using the same PacBio-HiFi dataset, we called structural variants (SVs) using Sniffles2 [36] and analyzed the results using truvari [37] (Section 4.5.2). We evaluated the SV calls using the GIAB Tier 1 benchmark regions for GRCh37 [28] and the GIAB CMRG benchmark for GRCh38 [26].…”
Section: Results
confidence: 99%
“…We next compressed the VCF files for each dataset using bgzip and indexed them with tabix [49]. Finally, we benchmarked and compared the SV calls using the GIAB Tier 1 benchmark regions for GRCh37 [28] and the GIAB CMRG benchmark for GRCh38 [26] using truvari 2.1 [37] and following the GIAB benchmarking instructions.…”
Section: Methods
confidence: 99%
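The compress-index-benchmark workflow quoted above can be sketched as follows. The file names and output directory are placeholders, not paths from the cited study; `bgzip` and `tabix` are the htslib utilities, and `truvari bench` takes a base (truth) VCF with `-b`, a comparison VCF with `-c`, and an output directory with `-o`:

```shell
# Compress the SV calls with bgzip and build a tabix index (htslib).
bgzip -f sv_calls.vcf          # produces sv_calls.vcf.gz
tabix -p vcf sv_calls.vcf.gz

# Benchmark the calls against a GIAB truth set, restricted to the
# benchmark regions, per the GIAB benchmarking instructions.
truvari bench \
    -b giab_tier1_benchmark.vcf.gz \
    -c sv_calls.vcf.gz \
    --includebed giab_tier1_regions.bed \
    -o truvari_out/
```

`truvari bench` writes per-call VCFs (true positives, false positives, false negatives) plus a `summary.json` with precision and recall into the output directory.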
“…Recall and precision were calculated within the refined Dipcall confident regions (Methods) and then stratified by the GIAB v3.0 genomic context. To evaluate the SV (≥50 bp) calling performance, the autosomal SVs from a given pangenome graph (query set) were compared to the consensus SV call set (truth set) for each individual using truvari bench (v3.2.0, (English et al, 2022)) with options --multimatch -r 1000 -C 1000 -O 0.0 -p 0.0 -P 0.3 -s 50 -S 15 --sizemax 100000 --includebed <Dipcall confident regions>. Recall and precision were then stratified by the GIAB v3 genomic context and by variant length.…”
Section: Benchmarking Variants
confidence: 99%
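Assembled into a full command, the truvari bench options quoted above would look like this. The VCF and BED file names are placeholders; the options themselves (reference-distance, match thresholds, size bounds) are copied verbatim from the quoted methods:

```shell
# Options as quoted: --multimatch allows one truth call to match
# multiple query calls; -r/-C set distance limits; -O/-p/-P set
# overlap, sequence-similarity, and size-similarity thresholds;
# -s/-S/--sizemax bound the comparison/base SV sizes.
truvari bench \
    -b consensus_sv_truth.vcf.gz \
    -c pangenome_query.vcf.gz \
    --multimatch \
    -r 1000 -C 1000 \
    -O 0.0 -p 0.0 -P 0.3 \
    -s 50 -S 15 --sizemax 100000 \
    --includebed dipcall_confident_regions.bed \
    -o truvari_v320_out/
```

Note that `-p 0.0` disables sequence-similarity matching entirely, so matches here are driven by position, overlap, and the 0.3 size-similarity threshold.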
“…We assessed the performance of Sniffles2 with respect to Sniffles [27] (v1.12), cuteSV [45] (v1.0.11), PBSV [46] (v2.6.2) and SVIM [47] (v1.4.2) using Truvari [48] and the GIAB recommended parameters [49]. Figure 2 shows the results across different GIAB benchmarks.…”
Section: Results
confidence: 99%
“…We used Truvari [48] (version 2.1) for benchmarking the accuracy of all SV callers across datasets. For benchmarking, we used the --passonly parameter to include only those SVs from caller and gold standard that are not marked as filtered.…”
Section: Methods
confidence: 99%
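A minimal sketch of the `--passonly` usage described in this quote, with placeholder file names standing in for the caller output and gold-standard VCFs:

```shell
# --passonly makes truvari skip any record whose FILTER column marks
# it as filtered, in both the base (gold standard) and comparison
# (caller) VCFs, so only passing calls contribute to precision/recall.
truvari bench \
    -b gold_standard.vcf.gz \
    -c caller_output.vcf.gz \
    --passonly \
    -o truvari_passonly_out/
```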