Multimodal cognitive interface for robot navigation

2011 · DOI: 10.1007/s10339-010-0386-4

Abstract: To build effective interactions between humans and robots, they should have a common ground of understanding that creates realistic expectations and forms the basis of communication. An emerging approach to this is to create cognitive models of human reasoning and behavior selection. We have developed a robot navigation system that uses both spatial language and graphical representation to describe route-based navigation tasks for a mobile robot. Our proposed route instruction language (RIL) is intended as a…
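The abstract does not spell out the RIL syntax; as a minimal sketch, assuming command-style primitives of the kind quoted in the citing work below (e.g. "$GO()" and "$FOLLOW()") and hypothetical place names, a route description could be parsed into an ordered list of symbolic steps:

```python
import re
from typing import List, Tuple

# Hypothetical route description; the "$GO()" / "$FOLLOW()" primitives are the
# ones mentioned in the citing work below, the arguments are made up here.
ROUTE = "$GO(corridor); $FOLLOW(wall, left); $TURN(junction); $GO(door_3)"

# Match $NAME(arg1, arg2, ...) tokens.
_CMD = re.compile(r"\$(?P<name>[A-Z]+)\((?P<args>[^)]*)\)")

def parse_route(route: str) -> List[Tuple[str, List[str]]]:
    """Turn a command-style route description into ordered symbolic steps."""
    steps = []
    for m in _CMD.finditer(route):
        args = [a.strip() for a in m.group("args").split(",") if a.strip()]
        steps.append((m.group("name"), args))
    return steps

if __name__ == "__main__":
    for action, args in parse_route(ROUTE):
        print(action, args)
    # GO ['corridor']
    # FOLLOW ['wall', 'left']
    # TURN ['junction']
    # GO ['door_3']
```

Such a symbolic step list is the kind of intermediate form an instruction interpreter could then ground against a map, as discussed in the citation statements below.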

Cited by 4 publications (3 citation statements)
References 16 publications (16 reference statements)

“…Progress in unseen spaces has relied on using constrained subsets of human symbols, or on limiting how far the robot can explore outside of already seen spaces. The guarantee of sequence in route instructions has been exploited with symbol subsets ranging from pointing gestures [13] and input restricted to artificial instruction sets like "$GO()" and "$FOLLOW()" [14], all the way to natural language [43] and free-hand sketches [15]. A limited semantic vocabulary consisting of four prepositions has been used to improve existing navigation performance in observed spaces [11], [44].…”
Section: Use of Human Symbols in Robot Navigation (mentioning)
confidence: 99%
“…The spatial information is anchored to the semantic information and the approach is validated via experiments where a mobile robot uses and infers new semantic information from its environment, improving its operation. Similarly Elmogy in [23] investigates how a topological map is generated to describe relationships among features of the environment in a more abstract form to be used in a robot navigation system. A language for instructing the robot to execute a route in an indoor environment is presented where an instruction interpreter processes a route description and generates its equivalent symbolic and topological map representations.…”
Section: Perceptual Anchoring (mentioning)
confidence: 99%
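The statement above summarizes the paper's instruction interpreter, which converts a route description into symbolic and topological map representations. A minimal sketch of the topological part, with hypothetical node and action names (nothing here is taken from the paper's actual data structures), could link the places referenced by the parsed steps with action-labelled edges:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# Parsed route steps of the form (action, place); hypothetical example data.
STEPS: List[Tuple[str, str]] = [
    ("GO", "corridor"),
    ("FOLLOW", "corridor"),   # still in the corridor while following the wall
    ("TURN", "junction"),
    ("GO", "door_3"),
]

def build_topological_map(steps, start="start"):
    """Build a simple topological map: place nodes linked by action-labelled edges."""
    edges: Dict[str, List[Tuple[str, str]]] = defaultdict(list)
    current = start
    for action, place in steps:
        if place != current:
            edges[current].append((action, place))  # edge: current --action--> place
            current = place
    return dict(edges)

if __name__ == "__main__":
    topo = build_topological_map(STEPS)
    for node, outgoing in topo.items():
        for action, target in outgoing:
            print(f"{node} --{action}--> {target}")
    # start --GO--> corridor
    # corridor --TURN--> junction
    # junction --GO--> door_3
```

Edges are only added when an instruction moves the robot to a new place, so repeated references to the same place collapse into a single node.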
“…The module can further improve the interaction process by reducing the workload for the user and making the input signal more reliable. Many works [62], [60], [59], [63] have proved that communication with multiple inputs can improve the quality of human-robot communication. This is due to several reasons.…”
Section: Framework Discussion (mentioning)
confidence: 99%