2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI)
DOI: 10.1109/icacci.2017.8126035
Learning from demonstration algorithm for cloth folding manipulator

Cited by 9 publications (7 citation statements); References 11 publications
“…Sannapaneni et al [63] developed an algorithm for learning a folding sequence from the visual detection of special markers attached to key points of the cloth during a demonstration. The learned sequence is then generalized to handle different sizes of clothes which have the same shape as in the demonstration.…”
Section: Planar Objects (mentioning)
confidence: 99%
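The generalization step summarized above, reusing a demonstrated fold sequence on a same-shaped cloth of different dimensions, can be sketched as a simple rescaling of the demonstrated key points. This is an illustrative assumption about how such generalization could work; the function name, marker layout, and coordinates are hypothetical and not taken from the cited paper:

```python
# Hedged sketch: generalize a demonstrated fold sequence to a new cloth size
# by scaling marker/grasp coordinates relative to the demo cloth's dimensions.
# All names and values here are illustrative, not from the cited paper.

def generalize_fold_sequence(demo_grasps, demo_size, new_size):
    """Scale demonstrated grasp points (x, y), recorded on a cloth of
    demo_size (width, height), onto a same-shaped cloth of new_size."""
    sx = new_size[0] / demo_size[0]
    sy = new_size[1] / demo_size[1]
    return [(x * sx, y * sy) for (x, y) in demo_grasps]

# Demonstrated fold: grasp the two far corners of a 30x40 cm cloth.
demo = [(0.0, 40.0), (30.0, 40.0)]
scaled = generalize_fold_sequence(demo, (30.0, 40.0), (60.0, 80.0))
# → [(0.0, 80.0), (60.0, 80.0)]
```

The key design point is that the learned sequence is stored in coordinates normalized to the cloth's dimensions, so only the scale factors change between demonstration and execution.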
See 1 more Smart Citation
“…Sannapaneni et al [63] developed an algorithm for learning a folding sequence from the visual detection of special markers attached to key points of the cloth during a demonstration. The learned sequence is then generalized to handle different sizes of clothes which have the same shape as in the demonstration.…”
Section: Planar Objectsmentioning
confidence: 99%
“…Sannapaneni et al [63] developed a system that learns a folding task from visual demonstrations. Special markers are attached to key points of the object while it is folded by a human operator.…”
Section: Learned Control (mentioning)
confidence: 99%
“…Matas [11] shows that even a handful of human demonstrations is enough to increase folding performance, but does not disclose how the demonstration data is collected. In [19], users place physical round black objects on the cloths to label the grasp points. Tanaka [18] created a 2D Graphical User Interface (GUI) in which a user chooses two grasp points and a vector representing the displacement for a two-arm humanoid robot.…”
Section: B. Interfaces for Demonstrating Cloth Folding (mentioning)
confidence: 99%
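The GUI-based interface described above reduces a two-arm folding demonstration to two grasp points plus one displacement vector. A minimal sketch of that data representation, assuming a shared displacement for both arms; the class and field names are illustrative, not from the cited work:

```python
# Hedged sketch: a two-arm fold demonstration as collected by a 2D GUI,
# i.e. two grasp points and a displacement vector applied to both.
# The dataclass and field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Tuple

Point = Tuple[float, float]

@dataclass
class FoldDemo:
    left_grasp: Point    # pixel coordinates chosen in the GUI
    right_grasp: Point
    displacement: Point  # (dx, dy) applied to both grasp points

    def release_points(self) -> Tuple[Point, Point]:
        """Where each gripper should release the cloth after the fold."""
        dx, dy = self.displacement
        return ((self.left_grasp[0] + dx, self.left_grasp[1] + dy),
                (self.right_grasp[0] + dx, self.right_grasp[1] + dy))

demo = FoldDemo(left_grasp=(10.0, 0.0), right_grasp=(50.0, 0.0),
                displacement=(0.0, 30.0))
# demo.release_points() → ((10.0, 30.0), (50.0, 30.0))
```

Encoding a demonstration this compactly is what makes such GUI interfaces cheap to use compared with full teleoperation, at the cost of expressing only straight-line folds.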
“…Existing data collection methods for human folding demonstrations include selecting grasp points in real images [5], [18] or simulated images [11], providing a sequence of images as sub-goals [16], hard-coded paths pre-defined by humans [17], placing physical markers on clothes [19] and remote teleoperation [20]. Evidenced by the large variety of methods, it is clear that there is no standardization in the literature on how to collect human demonstration data.…”
Section: Introduction (mentioning)
confidence: 99%
“…Recently, several learning-based approaches to robotic folding were proposed, e.g. [10], [11], [12], [13], [14]. However, these methods have been tested only on a single fabric material and the accuracy has not been measured.…”
Section: arXiv:1904.01298v1 [cs.RO] 2 Apr 2019 (mentioning)
confidence: 99%