2020
DOI: 10.1007/978-3-030-58565-5_1
The Average Mixing Kernel Signature

Abstract: We introduce the Average Mixing Kernel Signature (AMKS), a novel signature for points on non-rigid three-dimensional shapes based on the average mixing kernel and continuous-time quantum walks. The average mixing kernel holds information on the average transition probabilities of a quantum walk between each pair of vertices of the mesh up to a time T. We define the AMKS by decomposing the spectral contributions of the kernel into several bands, allowing us to limit the influence of noise-dominated high-freque…
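The construction described in the abstract can be sketched numerically. Below is a minimal illustration (not the authors' implementation) of the average mixing kernel on a hypothetical toy graph: the quantum-walk mixing matrix M(t) = |exp(-iLt)|², with L the graph Laplacian, is averaged over [0, T] by sampling. The spectral band decomposition that defines the full AMKS descriptor is omitted here.

```python
import numpy as np

# Toy example: path graph on 5 nodes (stand-in for a mesh Laplacian).
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A  # combinatorial graph Laplacian

# Eigendecomposition of the (symmetric) Laplacian.
lam, Phi = np.linalg.eigh(L)

def avg_mixing_kernel(lam, Phi, T, n_samples=2000):
    """Average of the mixing matrix M(t) = |U(t)|^2 over t in [0, T],
    where U(t) = exp(-i L t) is the quantum-walk evolution operator."""
    ts = np.linspace(0.0, T, n_samples)
    M = np.zeros_like(Phi)
    for t in ts:
        U = Phi @ np.diag(np.exp(-1j * lam * t)) @ Phi.T
        M += np.abs(U) ** 2  # element-wise transition probabilities
    return M / n_samples

M_hat = avg_mixing_kernel(lam, Phi, T=10.0)
```

Since U(t) is unitary, each row of M(t) is a probability distribution, so each row of the averaged kernel M_hat sums to 1; the matrix is also symmetric for a symmetric Laplacian.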

Cited by 13 publications (6 citation statements)
References 34 publications
“…To handle this ambiguity, some methods depend on a local reference frame, such as SHOT [58], RoPS [22] and TriSi [23], while others rely on pair-wise point description, such as PFH [51] and FPFH [50]. For non-rigid meshes, spectral descriptors are often used [3,8,12] given their invariance to (near-)isometric deformations. More recently, data-driven approaches have been proposed to compress hand-crafted features into a compact yet informative representation [26] or to learn a more robust feature description directly from point clouds [14,46].…”
Section: Related Work
confidence: 99%
“…We base our method on the functional map framework defined in [28], which seeks to match functional spaces on the shapes instead of the shapes themselves, and has led to impressive results in the last decade. Several follow-up works [27,45,31,18,25,30] have brought substantial improvements on the original pipeline, and all rely heavily on the existence of consistent descriptor functions of shapes, i.e., functions assumed to be preserved by the mapping, based either on local descriptors [43,2,8] or landmarks. Generating informative and robust descriptors in a fully automatic way remains a very challenging problem, and often requires near-isometric shapes without symmetries.…”
Section: Related Work
confidence: 99%
“…The size of the computed shape difference operators is set to k_M = k_N = 50, and the functional maps used to compute them are of size 3k_M × k_M, as advocated in [5]. Parameters for optimization problem (8) are µ_dc = 10, µ_l = 0, and µ_a = µ_c = 10^−4. All the terms of Equation (8) have been introduced separately in previous works [28,27,5], and we refer the reader to these articles or to the supplementary material for a more in-depth discussion of their effect.…”
Section: Matching Pipeline
confidence: 99%
“…However, these handcrafted descriptors often lead to inaccurate and time-consuming solutions. More recently, we have seen the emergence of data-driven approaches built upon modern machine learning techniques that learn the optimal features directly from massive shape-pair datasets [10], [11], [12], [13], [14], [15]. However, the major drawback here is the need for supervised learning, which relies on a sufficient number of labeled training pairs with high-quality ground-truth correspondences, which are known to be scarce and difficult to obtain.…”
Section: Introduction
confidence: 99%