2015 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2015.7139728

Learning symbolic representations of actions from human demonstrations

Abstract: In this paper, a robot learning approach is proposed which integrates Visuospatial Skill Learning, Imitation Learning, and conventional planning methods. In our approach, the sensorimotor skills (i.e., actions) are learned through a learning from demonstration strategy. The sequence of performed actions is learned through demonstrations using Visuospatial Skill Learning. A standard action-level planner is used to represent a symbolic description of the skill, which allows the system to represent the s…

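The abstract refers to a symbolic, action-level description of the learned skill that a standard planner can reason over. As a rough, hypothetical illustration of such a representation (the predicate names, action name, and helper functions below are assumptions for illustration, not the paper's actual formalism):

```python
# Hypothetical sketch of a STRIPS-like symbolic action, the kind of
# description an action-level planner consumes. All names here are
# illustrative assumptions, not the representation used in the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class SymbolicAction:
    name: str
    preconditions: frozenset  # facts that must hold before the action
    add_effects: frozenset    # facts made true by the action
    del_effects: frozenset    # facts made false by the action

def applicable(action, state):
    # The action can fire only if every precondition holds in the state.
    return action.preconditions <= state

def apply(action, state):
    # Progress the symbolic state: drop deleted facts, add new ones.
    return (state - action.del_effects) | action.add_effects

# Example: a pick action abstracted from a demonstrated pick-and-place step.
pick_obj_a = SymbolicAction(
    name="pick(objA)",
    preconditions=frozenset({"hand_empty", "on_table(objA)"}),
    add_effects=frozenset({"holding(objA)"}),
    del_effects=frozenset({"hand_empty", "on_table(objA)"}),
)

state = frozenset({"hand_empty", "on_table(objA)", "on_table(objB)"})
if applicable(pick_obj_a, state):
    state = apply(pick_obj_a, state)  # state now contains holding(objA)
```

A planner searching over operators of this kind can resequence the demonstrated actions to reach the goal from initial states other than the one shown in the demonstration.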
Cited by 52 publications (43 citation statements). References 20 publications.
“…These developments have led to a growing interest in making it easy for domain experts to transfer knowledge to collaborative robots, either through a user interface [2], [3], [4], [5], natural language [6], or learning from demonstration [7], [8], [9], [10]. To take full advantage of these systems, the human user must have an accurate mental model of a robot's capabilities [11].…”
Section: Introduction (mentioning, confidence: 99%)
“…Recent approaches for end-user instruction of collaborative robots include the development of new user interfaces [2], [3], [4], learning from demonstration [7], [10], or systems that make use of natural language together with ontologies and large knowledge bases to follow high-level instructions, such as Tell Me Dave [6] or RoboSherlock [12].…”
Section: Introduction (mentioning, confidence: 99%)
“…Moreover, the robot often tries to learn the teacher's goal [4], [12] and reproduces it from different initial states. Once the goal changes, the teacher needs to demonstrate a new action sequence.…”
Section: Related Work (mentioning, confidence: 99%)
“…To learn high-level conditions, the robot uses a perception system (e.g. SIFT [12] or a database of object features [15]) that recognises object properties in the state of the world. In our experiments, we implemented a simple python algorithm with integrated functionalities of the Robot Operating System (ROS), to detect and move objects, based on their colour.…”
Section: Overview (mentioning, confidence: 99%)
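The excerpt above describes a simple Python routine that uses ROS to detect objects by colour. A minimal sketch of that kind of pipeline is given below; the camera topic, output topic, and HSV thresholds are assumptions for illustration and are not taken from the cited work.

```python
#!/usr/bin/env python
# Minimal sketch of a colour-based object detector on ROS, in the spirit of
# the quoted excerpt. Topic names and HSV thresholds are assumed values.
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from geometry_msgs.msg import PointStamped

bridge = CvBridge()
pub = None

def image_callback(msg):
    # Convert the ROS image message to an OpenCV BGR array.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Threshold for a red-ish object (assumed HSV range).
    mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
    # [-2] keeps the contour list under both OpenCV 3 and OpenCV 4 return conventions.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return
    # Take the largest blob as the detected object and publish its pixel centroid.
    c = max(contours, key=cv2.contourArea)
    m = cv2.moments(c)
    if m["m00"] == 0:
        return
    out = PointStamped()
    out.header = msg.header
    out.point.x = m["m10"] / m["m00"]
    out.point.y = m["m01"] / m["m00"]
    pub.publish(out)

if __name__ == "__main__":
    rospy.init_node("colour_object_detector")
    pub = rospy.Publisher("/detected_object/centroid", PointStamped, queue_size=1)
    rospy.Subscriber("/camera/rgb/image_raw", Image, image_callback)
    rospy.spin()
```

The published pixel centroid would still need to be projected into the robot's workspace (e.g. via the camera model and a known table plane) before a pick-and-place action can use it.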