2017
DOI: 10.1016/j.artint.2015.08.009
Transferring skills to humanoid robots by extracting semantic representations from observations of human activities

Cited by 108 publications (59 citation statements)
References 17 publications
“…The average accuracy for the on-line segmentation and recognition of the overall activities for the scenarios shown in Table I is 84.8% for both hands. In our previous work [23], we concluded that the correct segmentation and recognition of human activities is not unique and greatly depends on the person interpreting the motions, especially when both hands are involved. The following link presents a video with more details about all these experimental results: http://web.ics.ei.tum.de/~karinne/Videos/RamirezDChumanoids15.avi …”
Section: B. Robot Execution Results
confidence: 99%
“…The ability to perform actions based on observations of human activities is one of the major challenges to increase the capabilities of robotic systems [1]. Over the past few years, this problem has been of great interest to researchers and remains an active field in robotics [2]. By understanding human actions, robots may be able to acquire new skills, or perform different tasks, without the need for tedious programming.…”
Section: Introduction
confidence: 99%
“…Another important aspect of our system is its scalability and adaptability toward different domains as presented in [5], [32]. It is important to highlight that the semantic rules are obtained off-line and can be re-used directly in multiple domains as an on-line component.…”
Section: B. Activity Inference From Robot Demonstrations
confidence: 99%
“…In addition, we introduce a multi-modal control approach to enable different dynamic behaviors for standard industrial robots. The integration of these components is done with our novel semantic reasoning framework [5] to teach kinesthetically new activities to robots, see Fig. 1.…”
Section: Introduction
confidence: 99%