2018
DOI: 10.48550/arxiv.1803.02739
Preprint

Nonparametric Estimation of Probability Density Functions of Random Persistence Diagrams

Abstract: We introduce a nonparametric way to estimate the global probability density function for a random persistence diagram. Precisely, a kernel density function centered at a given persistence diagram with a given bandwidth is constructed. Our approach encapsulates the number of topological features and considers the appearance or disappearance of features near the diagonal in a stable fashion. In particular, the structure of our kernel individually tracks long persistence features, while considering features near t…
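As a rough illustration of the kind of construction the abstract describes, the sketch below evaluates a Gaussian-mixture intensity over the birth-death plane induced by a single persistence diagram, with near-diagonal points down-weighted by their persistence. This is a simplified stand-in and not the paper's kernel, which additionally models feature cardinality and treats near-diagonal features collectively; the function name, weighting scheme, and bandwidth are illustrative assumptions.

```python
# Illustrative sketch only, not the paper's construction: a Gaussian
# mixture centered at the points of one diagram, where short-lived
# (near-diagonal) features get proportionally smaller weight.
import numpy as np

def diagram_intensity(x, diagram, bandwidth=0.1):
    """Density-like value at a point x = (birth, death) in the plane.

    diagram   : (n, 2) array of (birth, death) pairs.
    bandwidth : isotropic Gaussian bandwidth (an assumed simplification).
    """
    diagram = np.atleast_2d(diagram).astype(float)
    persistence = diagram[:, 1] - diagram[:, 0]       # feature lifetimes
    weights = persistence / persistence.sum()         # fade near diagonal
    sq_dists = np.sum((diagram - np.asarray(x)) ** 2, axis=1)
    gauss = np.exp(-sq_dists / (2 * bandwidth ** 2))
    norm = 1.0 / (2 * np.pi * bandwidth ** 2)         # 2-D Gaussian constant
    return float(norm * np.sum(weights * gauss))

# Example: two long-lived features and one near-diagonal feature.
dgm = np.array([[0.1, 0.9], [0.2, 0.8], [0.3, 0.32]])
print(diagram_intensity((0.15, 0.85), dgm, bandwidth=0.05))
```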

Cited by 3 publications (4 citation statements)
References 27 publications (51 reference statements)
“…There are also many other ways to map persistence diagrams to a vector space or Hilbert space. These include the Euler characteristic curve [62], the persistence scale-space map [56], complex vectors [33], pairwise distances [21], silhouettes [25], the longest bars [6], the rank function [58], the affine coordinate ring [2], the persistence weighted Gaussian kernel [44], topological pooling [12], the Hilbert sphere [5], persistence images [1], replicating statistical topology [3], tropical rational functions [42], death vectors [53], persistence intensity functions [26], kernel density estimates [55,50], the sliced Wasserstein kernel [20], the smooth Euler characteristic transform [32], the accumulated persistence function [9], the persistence Fisher kernel [45], persistence paths [27], and persistence contours [57]. Perhaps since the persistence diagram is such a rich invariant, it seems that any reasonable way of encoding it in a vector works fairly well.…”
Section: Related Work
confidence: 99%
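To make the list in the statement above concrete, here is a minimal sketch of one of the cited vectorizations, the persistence image [1], in simplified form: diagram points are mapped to (birth, persistence) coordinates, weighted by persistence, and smoothed onto a fixed grid, so every diagram yields a vector of the same length. The resolution, bandwidth, and extent below are illustrative choices, not values from [1].

```python
# Simplified persistence-image vectorization: any diagram becomes a
# fixed-length vector suitable for standard machine learning.
import numpy as np

def persistence_image(diagram, resolution=20, sigma=0.05, extent=(0, 1)):
    diagram = np.atleast_2d(diagram).astype(float)
    # Map (birth, death) -> (birth, persistence).
    bp = np.column_stack([diagram[:, 0], diagram[:, 1] - diagram[:, 0]])
    grid = np.linspace(extent[0], extent[1], resolution)
    xs, ys = np.meshgrid(grid, grid)                  # pixel centers
    img = np.zeros_like(xs)
    for (b, p) in bp:
        # Linear persistence weighting suppresses near-diagonal noise.
        img += p * np.exp(-((xs - b) ** 2 + (ys - p) ** 2) / (2 * sigma ** 2))
    return img.ravel()                                # fixed-length vector

vec = persistence_image(np.array([[0.1, 0.9], [0.3, 0.4]]))
print(vec.shape)   # (400,)
```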
“…Researchers desire to utilize persistence diagrams for inference and classification problems. Several achieve this directly with persistence diagrams [6,9,19,35,40,41,49], while others elect to first map them into a Hilbert space [1,8,18,48,57]. The latter approach enables one to adopt traditional machine learning and statistical tools such as principal component analysis, random forests, support vector machines, and more general kernel-based learning schemes.…”
Section: Introduction
confidence: 99%
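The workflow this statement describes is straightforward once diagrams live in a vector or Hilbert space: standard tools such as PCA and kernel-based classifiers apply directly. The sketch below, assuming scikit-learn, uses random vectors as placeholders for real diagram embeddings (such as the persistence images sketched earlier).

```python
# Placeholder pipeline: vectorized diagrams -> PCA -> kernel SVM.
# The random data stands in for genuine diagram embeddings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 400))      # 40 "vectorized diagrams"
y = rng.integers(0, 2, size=40)     # binary class labels

X_reduced = PCA(n_components=5).fit_transform(X)   # principal components
clf = SVC(kernel="rbf").fit(X_reduced, y)          # kernel-based learning
print(clf.score(X_reduced, y))
```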
“…The homological features in persistence diagrams have no intrinsic order, implying they are random sets as opposed to random vectors. This viewpoint is embraced in [40] to construct a kernel density estimator for persistence diagrams. This kernel density estimator gives a sensible way to obtain priors for distributions of persistence diagrams; however, computing posteriors entirely through the random set analog of Bayes' rule is computationally intractable in general settings [23].…”
Section: Introduction
confidence: 99%
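A kernel density estimator over diagrams, of the flavor this statement attributes to [40], can be sketched as an average of kernels centered at training diagrams. The crude symmetric nearest-point distance below is a stand-in for a proper diagram metric (bottleneck or Wasserstein) and is for illustration only; both function names are hypothetical.

```python
# Toy KDE over persistence diagrams: average Gaussian kernels centered
# at training diagrams, using a crude distance in place of a true
# bottleneck/Wasserstein metric.
import numpy as np

def crude_distance(d1, d2):
    """Symmetric average nearest-point distance between two diagrams."""
    d1, d2 = np.atleast_2d(d1), np.atleast_2d(d2)
    cross = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=2)
    return 0.5 * (cross.min(axis=1).mean() + cross.min(axis=0).mean())

def diagram_kde(new_diagram, training_diagrams, bandwidth=0.1):
    """Unnormalized KDE value: mean Gaussian kernel over training diagrams."""
    ds = np.array([crude_distance(new_diagram, t) for t in training_diagrams])
    return float(np.mean(np.exp(-(ds ** 2) / (2 * bandwidth ** 2))))

train = [np.array([[0.1, 0.8], [0.2, 0.5]]), np.array([[0.15, 0.75]])]
print(diagram_kde(np.array([[0.12, 0.78]]), train))
```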
“…TDA provides input features for machine learning algorithms, as well as a useful toolbox for classification. Several authors have used TDA on real-world problems, see [4,12,24,26,27,28,38,41] and the references therein. Persistent homology, which measures changes in topological features over different scales, is the main framework considered by these authors.…”
confidence: 99%
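As a concrete instance of the pipeline this last statement describes, the sketch below, assuming the third-party ripser package is installed, computes persistence diagrams for a noisy circle; its single loop shows up as one H1 feature that persists across many scales, and such lifetimes can serve as input features for downstream classifiers.

```python
# Persistent homology of a noisy circle via ripser: the circle's loop
# appears as one long-lived H1 feature.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 100)
X = np.column_stack([np.cos(theta), np.sin(theta)])
X += 0.05 * rng.normal(size=X.shape)             # noisy circle

dgms = ripser(X, maxdim=1)['dgms']               # [H0 diagram, H1 diagram]
h1 = dgms[1]
lifetimes = h1[:, 1] - h1[:, 0]
print("most persistent H1 feature lifetime:", lifetimes.max())
```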