2019
DOI: 10.1007/s41468-019-00032-z

On the choice of weight functions for linear representations of persistence diagrams

Abstract: Persistence diagrams are efficient descriptors of the topology of a point cloud. As they do not naturally belong to a Hilbert space, standard statistical methods cannot be directly applied to them. Instead, feature maps (or representations) are commonly used for the analysis. A large class of feature maps, which we call linear, depends on some weight functions, the choice of which is a critical issue. An important criterion to choose a weight function is to ensure stability of the feature maps with respect …
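For concreteness, linear feature maps of the kind studied in the paper take the general form

Φ(D) = Σ_{p ∈ D} w(p) · φ(p),

where D is a persistence diagram, w is the weight function whose choice the paper investigates, and φ sends each diagram point into a fixed Hilbert space (for example, via a Gaussian kernel). This formulation is supplied here as standard background, since the abstract above is truncated.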

Cited by 22 publications (21 citation statements) · References 40 publications

“…First, we recall the definition of the filtration time r(σ) := inf{r > 0 : σ ∈ 𝒦_r} as the first time r > 0 where a simplex σ ∈ 𝒦_T is contained in the complex 𝒦_r. In (Divol & Polonik, 2020, Lemma 6.10) it is shown that both in the Č- and the VR-filtration, the filtration times of p ≥ 2 iid uniform points in a ball admit a bounded and continuous density on ℝ_+, which is bounded above by some c_{1,d} > 0.…”
Section: Proof of Theorems 1 and 2—Tightness
Citation type: mentioning. Confidence: 99%
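To make the notion of filtration time concrete, here is a small sketch (assuming the gudhi library is installed; the point cloud and parameters are placeholders, not the setting of the cited lemma) that builds a Vietoris–Rips filtration and reads off r(σ) for each simplex. Note that gudhi's Rips filtration values follow the diameter convention, which can differ by a factor of two from a radius-based convention.

```python
import numpy as np
import gudhi  # assumed available: pip install gudhi

# Placeholder point cloud: iid uniform points in the unit disk
rng = np.random.default_rng(0)
n = 20
angles = rng.uniform(0.0, 2.0 * np.pi, n)
radii = np.sqrt(rng.uniform(0.0, 1.0, n))
points = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])

# Vietoris-Rips filtration; the filtration value of each simplex is r(sigma),
# the smallest scale at which that simplex enters the complex
rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=2)

for simplex, r_sigma in st.get_filtration():
    if len(simplex) > 1:  # vertices enter at r = 0, so skip them
        print(f"sigma = {simplex}, r(sigma) = {r_sigma:.3f}")
```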
“…Another approach to hypothesis testing is given by Robinson and Turner (2017). Other statistical approaches include Fasy et al (2014), which describes confidence intervals and a statistical approach to distinguishing important features from noise, Divol and Polonik (2019) and Maroulas et al (2019), which consider probability density functions for persistence diagrams, and Maroulas et al (2020), which describes a Bayesian framework. See Wasserman (2018) for a review of statistical techniques in the context of topological data analysis.…”
Section: Machine Learning
Citation type: mentioning. Confidence: 99%
“…Other techniques for vectorizing persistence barcodes involve heat kernels (Carrière et al, 2015), entropy (Merelli et al, 2015; Atienza et al, 2020), rings of algebraic functions (Adcock et al, 2016), tropical coordinates (Kališnik, 2019), complex polynomials (Di Fabio and Ferri, 2015), and optimal transport (Carrière et al, 2017), among others. Some of these techniques, including those by Zhao and Wang (2019) and Divol and Polonik (2019), allow one to learn the vectorization parameters that are best suited for a machine learning task on a given dataset. Others allow one to plug persistent homology information directly into a neural network (Hofer et al, 2017).…”
Section: Machine Learning
Citation type: mentioning. Confidence: 99%
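As an illustration of the "learnable vectorization" idea mentioned in the quote, the sketch below makes the exponent of a power weight w(b, d) = (d − b)^p a trainable parameter, so it can be optimized jointly with a downstream model. It is a hypothetical construction assuming PyTorch, not the method of any specific cited paper.

```python
import torch

class WeightedDiagramVectorizer(torch.nn.Module):
    """Linear representation with a learnable weight exponent (illustrative)."""

    def __init__(self, centers, sigma=0.1):
        super().__init__()
        self.register_buffer("centers", centers)          # (m, 2) grid of centers
        self.sigma = sigma
        self.log_p = torch.nn.Parameter(torch.zeros(1))   # learnable exponent, p = exp(log_p)

    def forward(self, diagram):
        # diagram: (n, 2) tensor of (birth, death) pairs
        pers = (diagram[:, 1] - diagram[:, 0]).clamp(min=1e-6)
        w = pers ** torch.exp(self.log_p)                  # w(b, d) = (d - b)^p
        d2 = ((diagram[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        phi = torch.exp(-d2 / (2.0 * self.sigma ** 2))     # Gaussian feature map
        return (w[:, None] * phi).sum(dim=0)               # weighted sum over diagram points

# Toy usage: gradients of a dummy loss flow into the weight exponent
grid = torch.cartesian_prod(torch.linspace(0, 1, 10), torch.linspace(0, 1, 10))
model = WeightedDiagramVectorizer(grid)
vec = model(torch.tensor([[0.1, 0.9], [0.2, 0.3]]))
vec.sum().backward()
print(model.log_p.grad)
```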
“…This loss of information can be detrimental to the analysis of the given dataset, as important features may be oversmoothed or unintentionally grouped together. One method of combating this potential loss of information is to apply a weighting function w(ξ, τ) to the PI representation to emphasize certain features (Divol and Polonik, 2019; Adams et al, 2017). A popular method is to assign a linear or exponential weight, such as w(ξ, τ) = (τ − ξ), to each point in the PD, which increases with the distance from the diagonal of the persistence diagram, before diffusion via the Gaussian kernel.…”
Section: Smoothing PD
Citation type: mentioning. Confidence: 99%
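A minimal sketch of this weighting-and-diffusion step, assuming a plain NumPy implementation of a persistence image: each diagram point (ξ, τ) is mapped to (birth, persistence), given the weight w(ξ, τ) = τ − ξ, and smoothed onto a grid with a Gaussian kernel. The grid resolution and bandwidth are illustrative choices, not values taken from the cited works.

```python
import numpy as np

def persistence_image(diagram, resolution=20, sigma=0.05,
                      weight=lambda birth, death: death - birth):
    """Sketch of a weighted persistence image.

    diagram: array of (birth, death) pairs. Each point is mapped to
    (birth, persistence), weighted by w(xi, tau) = tau - xi by default,
    and diffused with a Gaussian kernel onto a regular grid.
    """
    births = diagram[:, 0]
    pers = diagram[:, 1] - diagram[:, 0]
    xs = np.linspace(births.min(), births.max() + 1e-6, resolution)
    ys = np.linspace(0.0, pers.max() + 1e-6, resolution)
    gx, gy = np.meshgrid(xs, ys)

    image = np.zeros((resolution, resolution))
    for (b, d), p in zip(diagram, pers):
        w = weight(b, d)
        # Gaussian bump centred at (birth, persistence), scaled by the weight
        image += w * np.exp(-((gx - b) ** 2 + (gy - p) ** 2) / (2 * sigma ** 2))
    return image

# Toy usage: the long-lived feature dominates the image of the short-lived one
diagram = np.array([[0.1, 0.9], [0.2, 0.3]])
img = persistence_image(diagram)
```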