2016
DOI: 10.1016/j.cag.2015.07.018
Continuous semantic description of 3D meshes

Abstract: We propose a novel high-level signature for continuous semantic description of 3D shapes. Given an approximately segmented and labeled 3D mesh, our descriptor consists of a set of geodesic distances to the different semantic labels. This local multidimensional signature effectively captures both the semantic information (and relationships between labels) and the underlying geometry and topology of the shape. We illustrate its benefits on two applications: automatic semantic labeling, seen…
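The abstract describes the signature as, for each point on the mesh, a vector of geodesic distances to each semantic label. Below is a minimal sketch of that idea; it uses graph shortest paths along mesh edges as a stand-in for exact surface geodesics, and the function name `semantic_descriptor` and its input conventions are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def semantic_descriptor(vertices, faces, vertex_labels):
    """Per-vertex signature: distance to the nearest vertex of each semantic label.

    vertices      : (n, 3) float array of vertex positions
    faces         : (m, 3) int array of triangle indices
    vertex_labels : (n,) int array of label ids from the (approximate) segmentation

    Returns an (n, L) array D where D[p, k] approximates the geodesic distance
    from vertex p to the region carrying the k-th label.
    """
    n = len(vertices)

    # Undirected edge graph weighted by Euclidean edge length; shortest paths
    # along edges serve as a cheap approximation of surface geodesics.
    edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    edges = np.unique(np.sort(edges, axis=1), axis=0)
    weights = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
    graph = coo_matrix((weights, (edges[:, 0], edges[:, 1])), shape=(n, n)).tocsr()

    labels = np.unique(vertex_labels)
    D = np.empty((n, len(labels)))
    for k, lab in enumerate(labels):
        seeds = np.flatnonzero(vertex_labels == lab)
        # Distance from every vertex to the closest seed vertex of this label.
        dist = dijkstra(graph, directed=False, indices=seeds)
        D[:, k] = dist.min(axis=0)
    return D
```

Each row of D is then a local multidimensional signature in the sense of the abstract, which can be compared between points for labeling or similarity tasks.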

Cited by 7 publications (4 citation statements)
References 57 publications
“…The system can infer relationships implied in the text and yield common model placement patterns based on conditional probabilities. 14,15 Stanford University extended the 2D relational dataset to the 3D space by combining scene images with 3D space, semantics, and cameras to build a spatial semantic framework structure layer by layer through scene analysis in 2019. In 2020, the MIT SPARK Lab proposed the concept of dynamic 3D scene maps based on research from Stanford University.…”
Section: Related Work
mentioning confidence: 99%
“…In 2014, Stanford University proposed a scene generation system supporting partially interactable scene manipulation and active learning. The system can infer relationships implied in the text and yield common model placement patterns based on conditional probabilities 14,15 …”
Section: Related Work
mentioning confidence: 99%
“…As preprocessing, we manually charted the SMPL and the chimp meshes into L = 32 semantically-corresponding parts to guide the mapping. Then, for each vertex p of each mesh S, we extracted an adapted version of the continuous semantic descriptor d(p) proposed by Léon et al [25]:…”
Section: Annotation Through 3D Shape Re-mapping
mentioning confidence: 99%
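The excerpt above uses a per-vertex descriptor d(p) to guide a mapping between two annotated meshes. One illustrative way such a descriptor can transfer annotations is nearest-neighbor matching in descriptor space; the sketch below is an assumption for illustration (the helper `transfer_labels` is hypothetical and does not reproduce the cited work's exact adaptation).

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_labels(desc_src, labels_src, desc_dst):
    """Copy annotations from a source mesh to a target mesh by matching
    each target vertex to the source vertex with the closest descriptor.

    desc_src   : (n_src, L) per-vertex descriptors of the annotated mesh
    labels_src : (n_src,)   annotations on the source mesh
    desc_dst   : (n_dst, L) per-vertex descriptors of the un-annotated mesh
    """
    tree = cKDTree(desc_src)                 # index source descriptors
    _, nearest = tree.query(desc_dst, k=1)   # nearest source vertex per target vertex
    return labels_src[nearest]
```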
“…In particular, the proposed decomposition provides an automatic pre-processing for applications such as structuring a 3D shape (e.g. like [3] whose semantic descriptors relies on a decomposition of the shape), or similarity detection in a 3D shape, as required by Guy et al [4].…”
mentioning confidence: 99%