2017
DOI: 10.1177/0278364916683444

Goal-directed robot manipulation through axiomatic scene estimation

Abstract: Performing robust goal-directed manipulation tasks remains a crucial challenge for autonomous robots. In an ideal case, shared autonomous control of manipulators would allow human users to specify their intent as a goal state and have the robot reason over the actions and motions to achieve this goal. However, realizing this goal remains elusive due to the problem of perceiving the robot’s environment. We address and describe the problem of axiomatic scene estimation for robot manipulation in cluttered scenes …

Cited by 33 publications (24 citation statements) | References 57 publications
“…Liu et al. [23] also estimate a scene graph given observations; however, their approach approximates objects as oriented bounding boxes. Sui et al. proposed a generative approach (AxMC) [35] for scene graph estimation and used Markov Chain Monte Carlo (MCMC) to search for the scene graph hypothesis that best explains the observations. Both D2P and AxMC assume that the robot knows which objects are present in the scene and that objects are standing in their upright poses; thus, both methods can only estimate 3-DOF poses of objects (i.e., x, y, θ).…”
Section: B. Scene Perception for Manipulation (mentioning)
confidence: 99%
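The AxMC method quoted above frames scene estimation as a search over scene graph hypotheses scored against the observation. The sketch below is a minimal Metropolis-Hastings loop under that framing, assuming objects constrained to upright 3-DOF poses (x, y, θ) and a user-supplied log-likelihood; the function names and proposal parameters are illustrative, not taken from the paper.

```python
import math
import random

# A hypothesis maps each known object name to an upright 3-DOF pose (x, y, theta).
# The proposal and scoring interface below are illustrative placeholders.
def propose(hypothesis, step_xy=0.02, step_theta=0.1):
    """Perturb one randomly chosen object's pose to form a new hypothesis."""
    new_hyp = dict(hypothesis)
    obj = random.choice(list(new_hyp))
    x, y, theta = new_hyp[obj]
    new_hyp[obj] = (x + random.gauss(0, step_xy),
                    y + random.gauss(0, step_xy),
                    theta + random.gauss(0, step_theta))
    return new_hyp

def mcmc_scene_search(initial_hypothesis, log_likelihood, iterations=5000):
    """Metropolis-Hastings search for the scene hypothesis that best explains
    the observation, in the spirit of the generative AxMC formulation."""
    current = initial_hypothesis
    current_ll = log_likelihood(current)
    best, best_ll = current, current_ll
    for _ in range(iterations):
        candidate = propose(current)
        candidate_ll = log_likelihood(candidate)
        delta = candidate_ll - current_ll
        # Always accept uphill moves; accept downhill moves with Metropolis probability.
        if delta >= 0 or random.random() < math.exp(delta):
            current, current_ll = candidate, candidate_ll
            if current_ll > best_ll:
                best, best_ll = current, current_ll
    return best, best_ll
```

Here `log_likelihood` would compare a rendering of the hypothesized scene against the observed point cloud or depth image; inter-object support relations can then be read off the estimated poses.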
“…Towards natural and intuitive modes of human-robot communication, we present the Semantic Robot Programming (SRP) paradigm for declarative robot programming over user-demonstrated scenes. In SRP, we assume a robot is capable of goal-directed manipulation [35] for realizing an arbitrary scene state in the world. A user can program such goal-directed robots by demonstrating their desired goal scene.…”
Section: Introduction (mentioning)
confidence: 99%
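As a rough illustration of the declarative goal specification described in the SRP statement above, a goal scene can be encoded as a set of symbolic inter-object relations that perception must eventually satisfy. The predicate and object names in this sketch are hypothetical, not taken from the paper.

```python
# A goal scene expressed declaratively as symbolic axioms (inter-object relations).
# Object and predicate names are purely illustrative.
goal_axioms = {
    ("on", "mug", "tray"),
    ("on", "tray", "table"),
    ("clear", "mug"),
}

def goal_satisfied(estimated_axioms, goal_axioms):
    """A goal-directed planner would act until the axioms estimated from
    perception contain every axiom demanded by the goal scene."""
    return goal_axioms.issubset(estimated_axioms)
```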
“…Zhen et al. [43] generated scene hypotheses based on object detections returned by R-CNN [10] and used a Bayesian bootstrap filter to estimate object poses. Similarly, Sui et al. [35] and Narayanan et al. [24] proposed generative approaches for object pose estimation given RGB-D observations. Discriminative object pose estimation methods use local [14], [28] or global [29], [1] descriptors to estimate object poses via feature matching.…”
Section: Related Work (mentioning)
confidence: 99%
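The bootstrap filter mentioned in the statement above maintains a set of weighted pose hypotheses (particles) that are resampled according to how well they explain the observation. The following is a generic sequential importance resampling step over 3-DOF poses, assuming a user-supplied observation likelihood; it is a standard textbook sketch rather than any cited author's implementation.

```python
import random

def bootstrap_filter_step(particles, observation_likelihood,
                          noise_xy=0.01, noise_theta=0.05):
    """One bootstrap (sequential importance resampling) filter iteration over
    3-DOF object poses (x, y, theta). `observation_likelihood` scores a pose
    against the current RGB-D observation and is assumed to be provided."""
    # 1. Diffuse: perturb each particle (static scene, so no motion model).
    predicted = [(x + random.gauss(0, noise_xy),
                  y + random.gauss(0, noise_xy),
                  theta + random.gauss(0, noise_theta))
                 for (x, y, theta) in particles]
    # 2. Weight: score each predicted pose against the observation.
    weights = [observation_likelihood(p) for p in predicted]
    total = sum(weights)
    if total == 0:
        return predicted  # degenerate case: keep particles and retry next frame
    weights = [w / total for w in weights]
    # 3. Resample: draw a new particle set proportional to the weights.
    return random.choices(predicted, weights=weights, k=len(particles))
```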
“…Previously, we addressed the problem of perception for goal-directed manipulation as axiomatic scene estimation [32], [33], sharing aims similar to existing work in scene estimation for manipulation of rigid objects [22], [23], [19], [17], [6]. These methods take a generative multi-hypothesis approach to robustly inferring a tree-structured scene graph, as object poses and directed inter-object relations, from cluttered scenes observed as 3D point clouds.…”
Section: Introduction (mentioning)
confidence: 99%
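The tree-structured scene graph referred to in this last statement can be represented directly as object poses plus directed support relations. The dataclass below is one possible encoding under those assumptions; the field and relation names are illustrative only.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SceneNode:
    """One object in a tree-structured scene graph: a 6-DOF pose plus directed
    edges to the objects it supports (e.g., objects resting on top of it)."""
    name: str
    position: Tuple[float, float, float]             # x, y, z in the world frame
    orientation: Tuple[float, float, float, float]   # quaternion (x, y, z, w)
    children: List["SceneNode"] = field(default_factory=list)

def flatten_relations(node: SceneNode, parent: Optional[str] = None):
    """Recover the directed inter-object relations implied by the tree,
    e.g. ('supports', 'table', 'mug')."""
    relations = [] if parent is None else [("supports", parent, node.name)]
    for child in node.children:
        relations.extend(flatten_relations(child, node.name))
    return relations
```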