2018 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2018.8460606

Joining High-Level Symbolic Planning with Low-Level Motion Primitives in Adaptive HRI: Application to Dressing Assistance

Abstract: For a safe and successful daily living assistance, far from the highly controlled environment of a factory, robots should be able to adapt to ever-changing situations. Programming such a robot is a tedious process that requires expert knowledge. An alternative is to rely on a high-level planner, but the generic symbolic representations used are not well suited to particular robot executions. Contrarily, motion primitives encode robot motions in a way that can be easily adapted to different situations. This pap…

Cited by 24 publications (18 citation statements) · References 15 publications

“…Pignat and Calinon (2017) and Canal et al (2018) use learning from demonstration (LfD) to have a human teach the robot how to perform dressing tasks. Canal et al (2018) combines motion primitives with higher-level symbolic task planning to represent dressing tasks and demonstrate their method for putting a shoe onto a person's foot. A potential advantage of LfD is that individuals can directly teach the robot, which gives them the opportunity to communicate their preferences.…”
Section: Learning From Demonstration
confidence: 99%
“…Work by Gao et al and Zhang et al used vision to construct a model of a user's range of motion and a dynamic trajectory plan to dress that user in a vest [7]- [9]. The I-dress project has proposed several robotic dressing assistance techniques including: a learning-from-demonstration approach applied to dressing a jacket sleeve and shoe [10], [11], dressing data analysis techniques for classifying dressing errors and distinguishing different underlying garment layers [12], and a dressing state machine controlled by user gestures and voice commands demonstrated by dressing a user in loose rubber shoes [13].…”
Section: A. Robot-assisted Dressing
confidence: 99%
“…Real-time updates of the trajectory also ensure a one-shot dressing, which avoids multiple trials and errors in [5,6,10,11] that may put the users at risk. The proposed method also avoids markers attached on users or clothes [11,12,45] that may cause inconvenience in daily dressing scenarios.…”
Section: B. Human Posture Tracking
confidence: 99%