2020 · Preprint
DOI: 10.48550/arxiv.2006.09194

Finding the Homology of Manifolds using Ellipsoids

Sara Kališnik, Davorin Lešnik

Abstract: A standard problem in applied topology is how to discover topological invariants of data from a noisy point cloud that approximates it. We consider the case where a sample is drawn from a properly embedded C^1-submanifold without boundary in a Euclidean space. We show that we can deformation retract the union of ellipsoids, centered at sample points and stretching in the tangent directions, to the manifold. Hence the homotopy type, and therefore also the homology type, of the manifold is the same as that of t…
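The construction in the abstract thickens the sample by ellipsoids that are long in the tangent directions and thin in the normal directions, and the paper shows that this union deformation retracts to the manifold. Below is a minimal sketch of how such an ellipsoid thickening could be approximated from a finite sample, assuming tangent spaces are estimated by local PCA; the function names and the parameters `k`, `r`, and `eps` are illustrative assumptions, not the authors' implementation.

```python
# Sketch: approximate an ellipsoid thickening of a sampled manifold.
# Tangent directions at each sample are estimated by local PCA; the
# ellipsoid at a point has semi-axis r along the estimated tangent
# directions and r * eps along the normal directions.
import numpy as np
from scipy.spatial import cKDTree

def local_tangent_frames(points, k=10):
    """Estimate an orthonormal frame at each sample via PCA on its k nearest
    neighbors; the leading rows of each frame approximate the tangent space."""
    tree = cKDTree(points)
    frames = []
    for p in points:
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs, full_matrices=True)
        frames.append(vt)
    return np.array(frames)          # shape (n_samples, ambient_dim, ambient_dim)

def in_ellipsoid(x, center, frame, intrinsic_dim, r=0.5, eps=0.1):
    """Test whether x lies in the ellipsoid at `center`: semi-axes r along the
    estimated tangent directions, r * eps along the normal directions."""
    coords = frame @ (x - center)    # coordinates of x in the local frame
    scales = np.full(coords.shape, r * eps)
    scales[:intrinsic_dim] = r
    return float(np.sum((coords / scales) ** 2)) <= 1.0
```

The union of these ellipsoids over all sample points is the thickening whose homotopy type the paper relates to that of the underlying manifold.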

Cited by 1 publication (1 citation statement) · References 15 publications

“…One idea for improving on the current definition is to consider the local covariance of the probability distribution at each point [35] and to modify the Riemannian metric in such a way that with respect to the new Riemannian metric, the local covariance matrix at each point is the identity matrix (with respect to a positively oriented orthonormal basis). This idea is akin to the usual normalization that data scientists often do in Euclidean space, and it is also reminiscent of the ellipsoid thickenings of [26]. In the k-nearest neighbors filtration, each point is connected to its k-nearest neighbors at the kth filtration step.…”
Section: Conclusion and Discussion (mentioning)
confidence: 99%
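The local-covariance normalization described in the quoted statement amounts to a pointwise whitening: estimate the covariance of each point's neighborhood and rescale by its inverse square root so that the local covariance becomes the identity. The sketch below illustrates only that idea; the function name, the neighbor count `k`, and the regularizer `reg` are assumptions, not code from either cited work.

```python
# Sketch: pointwise whitening so each local covariance becomes the identity.
import numpy as np
from scipy.spatial import cKDTree

def local_whitening_transforms(points, k=15, reg=1e-8):
    """For each sample return a symmetric matrix W with W @ C_local @ W ≈ I,
    where C_local is the covariance of the point's k nearest neighbors."""
    tree = cKDTree(points)
    transforms = []
    for p in points:
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)
        cov = nbrs.T @ nbrs / (k - 1)
        # Symmetric inverse square root via eigendecomposition;
        # `reg` guards against near-singular local covariances.
        evals, evecs = np.linalg.eigh(cov)
        w = evecs @ np.diag(1.0 / np.sqrt(evals + reg)) @ evecs.T
        transforms.append(w)
    return np.array(transforms)
```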