2017
DOI: 10.1007/978-3-319-50115-4_28

Integrated Intelligence for Human-Robot Teams

Cited by 20 publications (19 citation statements)
References 16 publications
Citation types: 0 supporting, 19 mentioning, 0 contrasting
“…The sequential execution of the navigate, search, and observe actions constitutes a complete mission, examples of which are presented in more detail in recent work by Oh et al. [3]. Here we present the methodology and results of the assessments of each action as tested separately.…”
Section: Observe (mentioning)
Confidence: 98%
“…These experiments tested semantic navigation and perception, human-robot interaction, door detection, and pedestrian detection and tracking. The results of the tests of human-robot interaction are presented in Hill et al. [2], and the results of the complete runs are presented in Oh et al. [3]. Here, we present results from the experiments testing semantic navigation and perception, door detection, and pedestrian detection and tracking.…”
Section: Introduction (mentioning)
Confidence: 97%
“…Earlier work in this area includes that of Duvallet et al. (2013), which learns to follow navigational instructions in unknown environments based upon human demonstrations, as well as recent work on language-based visual navigation in novel environments (Anderson et al., 2018; Mei et al., 2016a). More closely related to our framework are methods that leverage metric and semantic information implicit or explicit in the command to learn a distribution over world models that facilitates natural language understanding in a priori unknown environments (Duvallet et al., 2014; Oh et al., 2016; Walter et al., 2014b). We address a different element of “partial observability” by inferring the state of…”
Section: Related Work (mentioning)
Confidence: 99%
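The approach this snippet summarizes grounds a command by maintaining a distribution over candidate world models rather than assuming a single known map. As a rough illustration of that idea only, a minimal Python sketch might marginalize a grounding score over weighted map hypotheses; all names and the toy scoring function below are assumptions for illustration, not the cited papers' implementations.

# Hypothetical sketch: grounding a command under a distribution over
# world models. Names and scoring are illustrative, not from the papers.
from dataclasses import dataclass

@dataclass
class WorldModel:
    landmarks: dict   # hypothesized map: landmark name -> (x, y)
    prior: float      # P(world), e.g. induced by cues in the command

def grounding_score(goal, command, world):
    """Toy likelihood that navigating to `goal` satisfies `command` in
    `world`: high when the goal is near a landmark the command mentions."""
    for name, (x, y) in world.landmarks.items():
        if name in command:
            dist = ((goal[0] - x) ** 2 + (goal[1] - y) ** 2) ** 0.5
            return 1.0 / (1.0 + dist)
    return 0.0

def ground(command, candidate_goals, worlds):
    """Choose the goal with the highest expected grounding score,
    marginalizing over the hypothesized world models."""
    return max(candidate_goals,
               key=lambda g: sum(w.prior * grounding_score(g, command, w)
                                 for w in worlds))

# Two hypotheses about where the unobserved "kitchen" might be.
worlds = [WorldModel({"kitchen": (5.0, 2.0)}, prior=0.7),
          WorldModel({"kitchen": (1.0, 8.0)}, prior=0.3)]
print(ground("go to the kitchen", [(5.0, 2.5), (1.0, 7.5)], worlds))
# -> (5.0, 2.5): the goal near the more probable kitchen hypothesis

The point of the sketch is only the expectation over world models: a goal is preferred when it scores well across the weighted hypotheses, not just in the single most likely map.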
“…Chung et al. [11] use HDCG on ground vehicles to implement navigation commands and demonstrate performance improvements over G³ in terms of running time, factor evaluations, and correctness. Oh et al. [12] integrate HDCG with their navigating robot system.…”
[Figure caption spilled into this snippet: “The overall pipeline of our approach, highlighting the NLP parsing module and the motion planner. Above the dashed line (from left to right): Dynamic Grounding Graphs (DGG) with latent parameters that are used to parse and interpret the natural language commands; generation of an optimization-based planning formulation with appropriate constraints and parameters using our mapping algorithm.”]
Section: A. Natural Language Processing (mentioning)
Confidence: 99%
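The pipeline described in this snippet parses a natural language command and then maps the parse to constraints and parameters for an optimization-based planner. A heavily simplified sketch of that mapping step only is given below; the parse schema and parameter names are assumptions made for illustration, not the DGG interface.

# Hypothetical sketch of mapping a parsed command to planner parameters;
# the parse schema and constraint names are illustrative assumptions.
def constraints_from_parse(parse):
    """Translate (verb, modifiers) from an NLP module into parameters
    consumed by a trajectory optimizer."""
    params = {"max_speed": 1.0, "min_clearance": 0.3}  # defaults (m/s, m)
    mods = parse.get("modifiers", [])
    if "slowly" in mods:
        params["max_speed"] = 0.3       # tighten the velocity bound
    if "carefully" in mods:
        params["min_clearance"] = 0.8   # widen the obstacle margin
    if parse.get("verb") == "stop":
        params["max_speed"] = 0.0       # hard stop overrides speed
    return params

# e.g. a parse of "move slowly to the table"
print(constraints_from_parse(
    {"verb": "move", "modifiers": ["slowly"], "goal": "table"}))
# -> {'max_speed': 0.3, 'min_clearance': 0.3}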
“…Most prior methods that combine NLP and motion planning have focused on understanding natural language instructions to compute robot motion in simple environments under simple constraints. Most of these methods are limited to navigation applications [12], [11], [6] or simple settings [7], or they have not been evaluated on real robots [10]. Nyga et al. [26], [27], [28], [29] use probabilistic relational models based on knowledge bases to understand natural language commands that describe visual attributes of objects.…”
Section: Benefits and Comparisons (mentioning)
Confidence: 99%