2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00185
Order-Aware Generative Modeling Using the 3D-Craft Dataset

Cited by 6 publications (5 citation statements)
References 15 publications
“…We compute two metrics: the Chamfer distance between the ground-truth final shape and the shape output by each model, and a normalized Mistakes to Complete (MTC) score, proposed in [7]. MTC computes the average percentage of steps at which a model makes a wrong pose prediction.…”
Section: Setup
confidence: 99%
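The two metrics in the statement above can be sketched as follows. This is a minimal illustration, not the cited papers' implementations: the function names, the use of point sets for Chamfer distance, and the exact MTC normalization (mistakes divided by total steps) are assumptions.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3)."""
    # Pairwise Euclidean distances between every point in a and every point in b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Average nearest-neighbor distance in both directions.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def normalized_mtc(predicted_poses, ground_truth_poses):
    """Fraction of steps where the predicted pose differs from the ground truth."""
    mistakes = sum(p != g for p, g in zip(predicted_poses, ground_truth_poses))
    return mistakes / len(ground_truth_poses)
```

A Chamfer distance of zero indicates the generated shape exactly covers the ground-truth shape (and vice versa); a lower MTC means fewer wrongly placed steps along the build sequence.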
“…As a community, we would like to build machines that can assist humans in constructing and assembling complex objects, such as block worlds [7], LEGO models [9], and furniture [35]. The assembly task involves a sequence of actions that move different 3D parts to desired poses.…”
Section: Introduction
confidence: 99%
“…Label-Conditioned Generative Models. To generate label-conditioned block placements, we adapt the VoxelCNN model presented in Chen et al [12]. Given a 3D patch of a scene with block-type information and a global view with occupancy information, VoxelCNN predicts the next block type and placement.…”
Section: Abstractions For Build() Operations
confidence: 99%
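The prediction interface described in that statement — local block-type patch plus global occupancy in, next block type and voxel position out — can be sketched as below. This is a toy stand-in, with a plain linear scorer in place of the actual VoxelCNN network; every name and shape here is illustrative, not from the paper.

```python
import numpy as np

def predict_next_block(local_patch, global_occupancy, weights, num_types):
    """Score every (block type, voxel) pair and return the best placement.

    local_patch      : 3D array of block-type ids around the last placement.
    global_occupancy : 3D binary array marking occupied voxels in the scene.
    weights          : (num_types * local_patch.size, feature_dim) scorer,
                       standing in for a trained network.
    """
    # Flatten both views into a single feature vector.
    features = np.concatenate([local_patch.ravel(), global_occupancy.ravel()])
    logits = weights @ features  # one score per (type, voxel) pair
    best = int(np.argmax(logits))
    num_voxels = local_patch.size
    block_type, voxel = divmod(best, num_voxels)
    position = np.unravel_index(voxel, local_patch.shape)
    return block_type, position
```

The joint argmax over (type, voxel) pairs mirrors the idea of predicting block type and placement together rather than in two independent stages.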
“…Our fine-tuning runs for an additional 4 epochs, and for each class we select the model with the best performance on the validation set. We use the same train-val split as Chen et al [12], but hold out 50% of the validation set as our test set. Averaging across categories, we achieve a top-10 accuracy of 66.0% and an average of 7.50 consecutively correct blocks.…”
Section: Abstractions For Build() Operations
confidence: 99%
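The two numbers reported in that statement follow from standard evaluation metrics, which could be computed roughly as below. The function names and the tie-breaking details are assumptions; "consecutively correct blocks" is read here as the number of correct predictions before the first mistake in a build sequence.

```python
def top_k_accuracy(ranked_predictions, targets, k=10):
    """Fraction of steps whose target appears among the top-k ranked predictions."""
    hits = sum(t in preds[:k] for preds, t in zip(ranked_predictions, targets))
    return hits / len(targets)

def consecutive_correct(predictions, targets):
    """Length of the correct prefix: steps matched before the first mistake."""
    count = 0
    for p, t in zip(predictions, targets):
        if p != t:
            break
        count += 1
    return count
```

Averaging `consecutive_correct` over a set of build sequences would yield a figure like the reported 7.50 blocks per sequence.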