2014
DOI: 10.1007/978-3-319-04099-8_7
Clear and Compress: Computing Persistent Homology in Chunks

Abstract: We present a parallelizable algorithm for computing the persistent homology of a filtered chain complex. Our approach differs from the commonly used reduction algorithm by first computing persistence pairs within local chunks, then simplifying the unpaired columns, and finally applying standard reduction on the simplified matrix. The approach generalizes a technique by Günther et al., which uses discrete Morse theory to compute persistence; we derive the same worst-case complexity bound in a more general context…
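The "commonly used reduction algorithm" that the abstract contrasts with is the standard column reduction of the filtered boundary matrix over Z/2. The following is a minimal sketch of that baseline, assuming columns are stored as sets of row indices; the function name and data layout are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the standard persistence reduction (the baseline the
# chunk algorithm improves on). Coefficients are in Z/2, and each column is
# represented as the set of row indices holding a nonzero entry.

def reduce_boundary_matrix(columns):
    """Reduce a filtered boundary matrix in place; return persistence pairs.

    columns[j] is the set of row indices in the boundary of the j-th simplex
    (columns are in filtration order). Returns a list of (birth, death) pairs.
    """
    pivot_of = {}        # pivot row index -> column that currently owns it
    pairs = []
    for j in range(len(columns)):
        col = set(columns[j])
        # Add earlier columns (mod 2) while the lowest nonzero entry clashes.
        while col and max(col) in pivot_of:
            col ^= columns[pivot_of[max(col)]]
        columns[j] = col
        if col:          # nonzero column: simplex max(col) is paired with j
            pivot_of[max(col)] = j
            pairs.append((max(col), j))
    return pairs
```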

Cited by 74 publications (108 citation statements)
References 15 publications
“…These methods focus on shrinking the input to the persistence algorithm. Another line of work attempts to speed up the computation of the algorithm directly using discrete Morse theory or related reductions as in the work of Mischaikow and Nanda [33] and Bauer, Kerber and Reininghaus [6].…”
Section: Introduction (mentioning)
confidence: 99%
“…The algorithm is naturally parallel: blocks within a diagonal can be processed independently. Bauer, Kerber, and Reininghaus [20,21] have examined the practical aspects of this algorithm. Notably, they found clever ways to combine seemingly incompatible optimizations and implemented the algorithm, both in shared and distributed memory.…”
Section: Introduction (mentioning)
confidence: 98%
“…Another notable result is the clearing optimization [19], which zeroes out entire columns of the matrix without processing them. It is also possible to combine the two optimizations [20], although doing so requires a different algorithm.…”
Section: Introduction (mentioning)
confidence: 99%
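A rough sketch of how the clearing optimization mentioned above can be layered on the same column reduction, processing dimensions from the top down. The dimension bookkeeping (a `dims` list) and the names are assumptions made for illustration, not the interface of the cited implementations.

```python
# Sketch of the clearing optimization: once a column j acquires pivot i,
# column i (one dimension lower) is known to reduce to zero, so it is
# "cleared" without being processed. Data layout as in the sketch above.

def reduce_with_clearing(columns, dims):
    """columns[j]: set of row indices; dims[j]: dimension of simplex j."""
    pivot_of, pairs = {}, []
    cleared = set()                          # columns known to reduce to zero
    for d in range(max(dims), 0, -1):        # highest dimension first
        for j in (k for k, dk in enumerate(dims) if dk == d):
            if j in cleared:
                columns[j] = set()           # zero it out without reduction
                continue
            col = set(columns[j])
            while col and max(col) in pivot_of:
                col ^= columns[pivot_of[max(col)]]
            columns[j] = col
            if col:
                i = max(col)
                pivot_of[i] = j
                pairs.append((i, j))
                cleared.add(i)               # column i never needs reducing
    return pairs
```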
“…Most methods use Gaussian elimination for this reduction and therefore incur a cubic worst-case time complexity in the number n of simplex insertions [20]. Nevertheless, in practice they are observed to behave near-linearly in n on typical data. The most optimized ones among them [1,3] are able to process millions of simplex insertions per second on a recent machine, which is considered fast enough for many practical purposes.…”
(mentioning)
confidence: 99%
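For a concrete toy illustration of the reduction these statements refer to, the sketch from above can be run on a filled triangle filtered vertex by vertex and edge by edge; the indices and expected output below belong to this hypothetical example, not to the cited benchmarks.

```python
# Toy usage of reduce_boundary_matrix (sketched earlier) on a filled triangle,
# filtered as three vertices (0-2), three edges (3-5), then the triangle (6).
boundary = [set(), set(), set(),        # vertices have empty boundaries
            {0, 1}, {1, 2}, {0, 2},     # the three edges
            {3, 4, 5}]                  # the triangle, bounded by the edges
print(reduce_boundary_matrix(boundary))
# [(1, 3), (2, 4), (5, 6)] -- vertices 1 and 2 are killed by edges 3 and 4;
# the 1-cycle created by edge 5 is killed by the triangle; vertex 0 is essential.
```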