2016
DOI: 10.1007/978-3-319-49409-8_20

VConv-DAE: Deep Volumetric Shape Learning Without Object Labels

Abstract: With the advent of affordable depth sensors, 3D capture becomes more and more ubiquitous and already has made its way into commercial products. Yet, capturing the geometry or complete shapes of everyday objects using scanning devices (e.g. Kinect) still comes with several challenges that result in noise or even incomplete shapes. Recent success in deep learning has shown how to learn complex shape distributions in a data-driven way from large scale 3D CAD Model collections and to utilize them for 3D processing…

Cited by 223 publications (157 citation statements)
References 26 publications
“…The volumetric representation is processed by 3D ShapeNets to identify the observed shape, the free space and the occluded space. The method presented by [31] proposes a network for deep volumetric shape learning. Given a collection of shapes of various objects and their different poses, the network learns the distributions of shapes of various classes by predicting the missing sections.…”
Section: Related Work
Mentioning confidence: 99%
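The quoted description is essentially a denoising objective over voxel grids: parts of the occupancy grid are removed or occluded at training time and the network is trained to reproduce the complete shape. Below is a minimal PyTorch-style sketch of such a volumetric denoising autoencoder; the 30^3 grid resolution, the layer sizes, and the random-dropout corruption are illustrative assumptions, not the exact VConv-DAE configuration.

```python
import torch
import torch.nn as nn

class VolumetricDAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 3D convolutions compress the occupancy grid to a latent code.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=9, stride=3), nn.ReLU(),
            nn.Conv3d(64, 256, kernel_size=4, stride=2), nn.ReLU(),
        )
        # Decoder: transposed convolutions upsample the code back to the grid.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(256, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.ConvTranspose3d(64, 1, kernel_size=9, stride=3), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def corrupt(voxels, drop_prob=0.5):
    """Randomly zero out voxels so the network must predict the missing parts."""
    mask = (torch.rand_like(voxels) > drop_prob).float()
    return voxels * mask

model = VolumetricDAE()
clean = (torch.rand(8, 1, 30, 30, 30) > 0.7).float()    # toy 30^3 occupancy grids
recon = model(corrupt(clean))                            # reconstruct from corrupted input
loss = nn.functional.binary_cross_entropy(recon, clean)  # per-voxel reconstruction loss
```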
“…Although these methods have shown promising results for 3D shape extraction, in most cases they are limited to specific objects. Furthermore, they do not address the extraction of polyhedral structures from buildings or outdoor scenes [31,32,33]. Conversely, the approaches that perform well in outdoor scenes are, in most cases, limited to extracting planar sections and do not recover polyhedral structures [15,29].…”
Section: Related Work
Mentioning confidence: 99%
“…The decoder uses two deconvolution layers: the first layer's filter size and feature-map count were set to (i, j, k) = 6, f_out = 64, with a stride of two, and the second layer uses (i, j, k) = 7, f_out = 1, with a stride of three. This modified network was named AE‐CNN, after the autoencoder introduced in [15].…”
Section: Methods
Mentioning confidence: 99%
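For concreteness, here is a PyTorch-style sketch of the two deconvolution (transposed-convolution) layers quoted above. Only the kernel sizes ((i, j, k) = 6 and 7), the strides (2 and 3) and the output channel counts (f_out = 64 and 1) come from the text; the number of input channels and the latent spatial size are assumptions.

```python
import torch
import torch.nn as nn

decoder = nn.Sequential(
    # First deconvolution: kernel (i, j, k) = 6, f_out = 64, stride 2.
    nn.ConvTranspose3d(in_channels=256, out_channels=64, kernel_size=6, stride=2),
    nn.ReLU(),
    # Second deconvolution: kernel (i, j, k) = 7, f_out = 1, stride 3.
    nn.ConvTranspose3d(in_channels=64, out_channels=1, kernel_size=7, stride=3),
    nn.Sigmoid(),
)

# With no padding, a transposed convolution maps spatial size s to (s - 1) * stride + kernel,
# so an assumed 3^3 latent grid is upsampled to 10^3 and then to 34^3.
latent = torch.randn(1, 256, 3, 3, 3)   # assumed latent shape
print(decoder(latent).shape)            # torch.Size([1, 1, 34, 34, 34])
```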
“…In addition, autoencoder‐based deep learning approaches have been used to enhance the performance of 3D shape reconstruction. Sharma et al. [15] developed an end‐to‐end reconstruction technique, known as a fully volumetric convolutional denoising autoencoder (VConv‐DAE). They used convolution layers to obtain a latent representation of the input object and a learnable upsampling convolution filter (i.e.
Section: Introduction
Mentioning confidence: 99%
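As a usage sketch, a trained network of this kind would be applied to a noisy or incomplete scan by predicting per-voxel occupancy probabilities and thresholding them. The helper below and the 0.5 threshold are illustrative assumptions, not part of [15].

```python
import torch

@torch.no_grad()
def complete_shape(model, noisy_voxels, threshold=0.5):
    """Run a trained volumetric autoencoder (e.g. the VolumetricDAE sketched
    above) on a single noisy occupancy grid and binarize the prediction."""
    model.eval()
    probs = model(noisy_voxels.unsqueeze(0).unsqueeze(0))  # add batch & channel dims
    return (probs.squeeze() > threshold).float()
```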