2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros45743.2020.9340764

Latent Space Roadmap for Visual Action Planning of Deformable and Rigid Object Manipulation

Abstract: Haptic feedback is essential for humans to successfully perform complex and delicate manipulation tasks. A recent rise in tactile sensors has enabled robots to leverage the sense of touch and drastically expand their capabilities. However, many tasks still need human intervention or guidance. For this reason, we present a teleoperation framework designed to provide haptic feedback to human operators based on data from camera-based tactile sensors mounted on the robot gripper. Partial autonomy is introduced to p…

Cited by 39 publications (77 citation statements: 4 supporting, 73 mentioning, 0 contrasting)
References 34 publications
Citing publications: 2021–2024
“…Instead, an important work in this regard, but in the field of control, is Embed to Control [6], where the authors build on insights from the optimal control formulation to leverage a variational autoencoder (VAE) that learns to generate image trajectories from a latent space in which the dynamics are constrained to be locally linear. The work in [1] also uses VAEs, but with an augmented loss function, to generate visual action plans by means of a Latent Space Roadmap that produces a sequence of images as well as the connecting actions, given only a start and a goal image. However, since these approaches do not operate in a semantically interpretable latent space, it is difficult to determine whether plans or sequences of actions satisfy a given set of specifications.…”
Section: A. Related Work (mentioning)
confidence: 99%
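As a rough illustration of the planning idea described in this statement, the sketch below encodes start and goal images into a latent space, maps them to the nearest roadmap nodes, and reads off the connecting actions along a shortest path. This is a minimal sketch, not the authors' implementation: the `encode` stand-in, the toy roadmap, and the action labels are all hypothetical placeholders for a trained VAE and a data-built roadmap.

```python
# Minimal sketch of planning over a latent space roadmap, in the spirit of [1].
# Assumptions (not from the paper): `encode` stands in for a trained VAE
# encoder, and the roadmap nodes/edges are hand-built toy data.
import numpy as np
import networkx as nx

def encode(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained VAE encoder (image -> latent code)."""
    return image.reshape(-1)[:8].astype(float)  # placeholder projection

# Toy roadmap: nodes carry latent centroids, edges carry the connecting action.
roadmap = nx.Graph()
rng = np.random.default_rng(0)
for i in range(5):
    roadmap.add_node(i, z=rng.normal(size=8))
for u, v, action in [(0, 1, "pick"), (1, 2, "fold"), (2, 3, "place"), (3, 4, "release")]:
    roadmap.add_edge(u, v, action=action)

def nearest_node(z: np.ndarray) -> int:
    """Map a latent code to its closest roadmap node."""
    return min(roadmap.nodes, key=lambda n: np.linalg.norm(roadmap.nodes[n]["z"] - z))

def visual_action_plan(start_img: np.ndarray, goal_img: np.ndarray):
    """Return the node path and the action sequence from start to goal."""
    s, g = nearest_node(encode(start_img)), nearest_node(encode(goal_img))
    path = nx.shortest_path(roadmap, s, g)
    actions = [roadmap.edges[u, v]["action"] for u, v in zip(path, path[1:])]
    return path, actions

path, actions = visual_action_plan(rng.random((64, 64)), rng.random((64, 64)))
print(path, actions)
```

The graph search is what distinguishes the roadmap approach from purely local latent-dynamics models such as Embed to Control: planning reduces to shortest-path queries over discrete latent nodes rather than rollout of a learned dynamics model.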
“…We propose an architecture that learns, from provided demonstrations, to distinguish between system executions that satisfy user-defined requirements and undesired executions. The architecture extends the visual action planning framework in [1] by integrating binary classifiers that evaluate the satisfaction of the constraints without sacrificing the benefits of data-driven, low-dimensional latent space representations.…”
Section: A. Related Work (mentioning)
confidence: 99%
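The constraint-checking idea in this statement can be sketched as a binary classifier over latent codes. The following is a minimal sketch under stated assumptions: the data here is synthetic Gaussian blobs, whereas in the cited work the inputs would be VAE latent states of demonstrated executions.

```python
# Minimal sketch: a binary classifier over latent codes separates executions
# that satisfy user-defined requirements from undesired ones. All data below
# is synthetic; real inputs would be VAE latent states from demonstrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
z_ok = rng.normal(loc=+1.0, size=(100, 8))   # latent codes of valid executions
z_bad = rng.normal(loc=-1.0, size=(100, 8))  # latent codes of violating ones
X = np.vstack([z_ok, z_bad])
y = np.array([1] * 100 + [0] * 100)

clf = LogisticRegression().fit(X, y)

def satisfies_constraints(z: np.ndarray) -> bool:
    """Predict whether a latent state meets the user-defined requirements."""
    return bool(clf.predict(z.reshape(1, -1))[0])

print(satisfies_constraints(rng.normal(loc=1.0, size=8)))   # likely True
print(satisfies_constraints(rng.normal(loc=-1.0, size=8)))  # likely False
```

Because the classifier operates directly on the low-dimensional latent codes, constraint checks can be applied to roadmap nodes or planned paths without decoding back to images.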