2020
DOI: 10.48550/arxiv.2012.00284
Visual Identification of Articulated Object Parts

Abstract: As autonomous robots interact and navigate around real-world environments such as homes, it is useful to reliably identify and manipulate articulated objects, such as doors and cabinets. Many prior works in object articulation identification require manipulation of the object, either by the robot or a human. While recent works have addressed predicting articulation types from visual observations alone, they often assume prior knowledge of category-level kinematic motion models or sequence of observations where…

Cited by 4 publications (4 citation statements)
References 23 publications
“…To handle unseen objects, Li et al. [16] follow the pose estimation setting and propose a normalized coordinate space to estimate 6D pose and joint state for articulated objects. In terms of joint-centered perception tasks, several works attempt to mine joint configurations of articulated objects [11, 18, 33]. To investigate manipulation points for articulated objects from visual input, Mo et al. define six types of action primitives and predict interactions [23].…”
Section: Related Work
confidence: 99%
“…More recent work has started to explore training single models for motion prediction across categories and structures [10, 11, 16]. Zeng et al. [44] proposed an optical-flow-based approach on RGB-D images, given segmentation masks of the moving part and the fixed part. They evaluate only on ground-truth segmentation and do not investigate how part segmentation and detection influence the accuracy of motion prediction.…”
Section: Related Work
confidence: 99%
“…As an example, Anguelov et al. [7] decompose an articulated mesh into approximately rigid parts and use Expectation Maximization (EM) to estimate part assignments and transformations. Other recent methods focus on estimating the articulation of novel objects through images [8], [9], [10] or physical interaction [11], [12]. For example, Jain et al. [13] learn a distribution over articulation model parameters for novel objects with different degrees of freedom.…”
Section: Related Work
confidence: 99%