2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)
DOI: 10.1109/humanoids.2015.7363576

Multi-purpose natural language understanding linked to sensorimotor experience in humanoid robots

Abstract: Humans have an amazing ability to bootstrap new knowledge. The concept of structural bootstrapping refers to mechanisms relying on prior knowledge, sensorimotor experience, and inference that can be implemented in robotic systems and employed to speed up learning and problem solving in new environments. In this context, the interplay between the symbolic encoding of the sensorimotor information, prior knowledge, planning, and natural language understanding plays a significant role. In this paper, we show how t…

Cited by 15 publications (11 citation statements)
References: 32 publications
“…Here geometric considerations are primarily used to adapt existing knowledge to the new situation, and robotic manipulations are only considered in connection with discovered object positions. A similar question of memorizing common object locations is addressed by Ovchinnikova et al (2015). Beetz et al (2016) do indeed talk about manipulation data reuse, but for a different purpose than in our study…”
Section: Comparison With the State of the Art (mentioning)
confidence: 56%
“…Alternative studies do exist that emphasize data collection from robotic experiments in both industry-oriented (Björkelund et al, 2011; Persson et al, 2010) and service robotics domains (Beetz et al, 2016; Ovchinnikova et al, 2015; Riazuelo et al, 2015; Tenorth and Beetz, 2013; Tenorth et al, 2013; Winkler et al, 2014). In the following, we will discuss how the mentioned approaches relate to our study…”
Section: Discussion (mentioning)
confidence: 99%
“…Since we realized a large number of robot programs for a wide variety of applications on the robots of the ARMAR series, we gained rich experience that allows us to elaborate on the advantages and disadvantages of the proposed concept. The presented statechart approach has been used extensively not only to demonstrate simple tasks like the examples in this paper but also for complex skills applied in real-world scenarios, including grasping, opening and closing doors, mixing, or pouring, as presented in Ovchinnikova et al (2015). We think that the decision to restrict the ArmarX statecharts to a subset of Harel's original statechart definition has benefited our statechart concept, since the removed features (inter-level transitions, history connector) were rarely missed, while their removal improved comprehension and reusability significantly…”
Section: Discussion (mentioning)
confidence: 99%
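The restricted statechart design mentioned in the statement above (no inter-level transitions, no history connector) can be illustrated with a minimal sketch. This is not ArmarX code; the state and event names (GraspSkill, Approach, "reached", and so on) are hypothetical and chosen only to make the behavior concrete.

```python
# Minimal sketch of a hierarchical statechart restricted in the spirit of the
# statement above: states may nest, transitions only connect siblings inside
# the same composite state (no inter-level transitions), and re-entering a
# composite state always descends to its initial substate (no history
# connector). All names are hypothetical, not taken from ArmarX.

class State:
    def __init__(self, name, substates=None, initial=None):
        self.name = name
        self.substates = {s.name: s for s in (substates or [])}
        self.initial = initial        # name of the initial substate, if composite
        self.transitions = {}         # event -> name of a sibling state

    def on(self, event, target_name):
        self.transitions[event] = target_name


class Statechart:
    def __init__(self, root):
        self.path = self._enter(root)  # active configuration, outermost first

    def _enter(self, state):
        # Without a history connector, entering a composite state always
        # means descending to its declared initial substate.
        path = [state]
        while state.initial is not None:
            state = state.substates[state.initial]
            path.append(state)
        return path

    def dispatch(self, event):
        # Look for a handler from the innermost active state outwards.
        for depth in range(len(self.path) - 1, 0, -1):
            state = self.path[depth]
            if event in state.transitions:
                parent = self.path[depth - 1]
                target = parent.substates.get(state.transitions[event])
                if target is None:     # only sibling-to-sibling transitions
                    raise ValueError("inter-level transitions are not allowed")
                self.path = self.path[:depth] + self._enter(target)
                return
        # Unhandled events are silently ignored in this sketch.


# Hypothetical grasping skill: Approach -> Grasp -> Lift, then back to Idle.
approach, grasp, lift = State("Approach"), State("Grasp"), State("Lift")
approach.on("reached", "Grasp")
grasp.on("closed", "Lift")
skill = State("GraspSkill", [approach, grasp, lift], initial="Approach")
skill.on("lifted", "Idle")
idle = State("Idle")
idle.on("start", "GraspSkill")
root = State("Root", [idle, skill], initial="Idle")

sc = Statechart(root)
for event in ["start", "reached", "closed", "lifted"]:
    sc.dispatch(event)
    print(event, "->", [s.name for s in sc.path])
```

In this sketch, re-entering GraspSkill always starts at Approach, which is exactly the behavior a history connector would change.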
“…Interpreted models (Fig. 5): a typical interpreted model for NL instruction understanding [48]. Robot memory, real-world states, and human NL instructions were integrated to instruct a robot in plan execution…”
Section: A. Models (mentioning)
confidence: 99%
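The kind of integration described in the statement above, where an NL instruction is grounded against robot memory and the perceived world state before being turned into an executable plan, can be sketched as follows. The memory contents, world state, and action names are assumptions for illustration only and are not taken from the cited systems [48].

```python
# Hypothetical sketch: a simple NL instruction is grounded against robot
# memory (remembered object locations) and the perceived world state, and
# the result is a symbolic plan for execution. All contents below are
# illustrative assumptions, not data from the cited systems.

MEMORY = {"juice": "fridge", "cup": "cupboard"}      # remembered storage places
WORLD = {"fridge": "closed", "cupboard": "open"}     # perceived container states


def ground(instruction):
    """Map a 'bring me the X' style instruction to an object and its location."""
    obj = instruction.lower().rstrip(".").rsplit(" ", 1)[-1]
    location = MEMORY.get(obj)
    if location is None:
        raise ValueError(f"no memory of where '{obj}' is usually stored")
    return obj, location


def plan(obj, location):
    """Build a symbolic action sequence, adding steps the world state requires."""
    steps = []
    if WORLD.get(location) == "closed":
        steps.append(("open", location))
    steps += [("grasp", obj, location), ("bring", obj, "human")]
    return steps


if __name__ == "__main__":
    for step in plan(*ground("Bring me the juice")):
        print(step)
    # ('open', 'fridge'), ('grasp', 'juice', 'fridge'), ('bring', 'juice', 'human')
```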
“…object visual cues detected by cameras [58][59]; (4) robot sensorimotor behaviors monitored by both motion systems and computer vision systems [48]. Supported by rich information from these features, typical problems tackled in NLC include real-time communication, context-sensitive cooperation (sensor-speech alignment), machine-executable task plan generation, and implicit human request interpretation…”
Section: A. Models (mentioning)
confidence: 99%