2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
DOI: 10.1109/roman.2018.8525527
From Perception to Semantics: An Environment Representation Model Based on Human-Robot Interactions

Abstract: A robot, in order to be autonomous, needs a representation of its surrounding environment. From a general point of view, basic robotic tasks (such as localization, mapping, and object handling) can be carried out with only very simple geometric primitives, usually extracted from raw sensor data. But whenever an interaction with a human being is involved, robots must understand concepts expressed in human natural language. In most approaches, this is done through a prebuilt ontology. …
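To make the perception-to-semantics pipeline the abstract describes more concrete, the sketch below shows one possible way to link geometric primitives extracted from sensor data to natural-language concepts. This is a minimal illustrative sketch, not the paper's implementation; all class names, fields, and the grounding scheme are assumptions.

```python
# Illustrative sketch only: not the authors' model. All names are assumptions.
from dataclasses import dataclass, field

@dataclass
class GeometricPrimitive:
    """A raw perceptual entity, e.g. a plane or point cluster from sensor data."""
    primitive_id: int
    kind: str     # e.g. "plane", "cylinder", "point_cluster"
    pose: tuple   # simplified pose (x, y, theta)

@dataclass
class SemanticConcept:
    """A natural-language concept the robot can ground, e.g. 'table'."""
    name: str
    is_a: list = field(default_factory=list)  # parent concepts

@dataclass
class EnvironmentModel:
    """Links perception-level primitives to language-level concepts."""
    primitives: dict = field(default_factory=dict)
    concepts: dict = field(default_factory=dict)
    grounding: dict = field(default_factory=dict)  # primitive_id -> concept name

    def ground(self, primitive: GeometricPrimitive, concept: SemanticConcept):
        """Record that a perceived primitive instantiates a semantic concept."""
        self.primitives[primitive.primitive_id] = primitive
        self.concepts[concept.name] = concept
        self.grounding[primitive.primitive_id] = concept.name

    def describe(self, primitive_id: int) -> str:
        """Answer 'what is this?' in terms usable in a human dialogue."""
        name = self.grounding.get(primitive_id)
        return f"primitive {primitive_id} is a '{name}'" if name else "unknown object"

# Usage: a perceived plane gets grounded as the concept "table".
model = EnvironmentModel()
plane = GeometricPrimitive(primitive_id=1, kind="plane", pose=(1.0, 2.0, 0.0))
table = SemanticConcept(name="table", is_a=["furniture"])
model.ground(plane, table)
print(model.describe(1))
```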

Cited by 5 publications (5 citation statements)
References 28 publications
“…This paper is an extension of our previous work [8]. We propose a heuristics-driven method for the automatic generation of an object-based ontology from dictionary definitions.…”
Section: Environment Modeling: A Multimodal Approach
confidence: 95%
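The quoted extension describes heuristics-driven ontology generation from dictionary definitions. As a rough illustration of how such heuristics might work (the two toy rules and the example definition below are assumptions, not the authors' actual method), pattern matching on a definition string can yield isA and usedFor relations:

```python
# Hypothetical heuristics in the spirit of the quoted approach; the regex
# rules and the sample definition are illustrative assumptions.
import re

def extract_relations(term: str, definition: str) -> list:
    """Apply two toy heuristics to a dictionary definition:
    - the head noun of the leading 'a/an ...' phrase yields an isA relation
    - a 'used for <word>' phrase yields a usedFor relation
    """
    triples = []
    # Heuristic 1: genus term, approximated as the last word before
    # 'used'/'that'/'which' or a comma (toy rule, no real POS tagging).
    m = re.match(r"an?\s+(.+?)(?:\s+(?:used|that|which)\b|,|$)", definition)
    if m:
        triples.append((term, "isA", m.group(1).split()[-1]))
    # Heuristic 2: affordance from a 'used for X' phrase.
    m = re.search(r"used for (\w+)", definition)
    if m:
        triples.append((term, "usedFor", m.group(1)))
    return triples

print(extract_relations("cup", "a small open container used for drinking"))
# [('cup', 'isA', 'container'), ('cup', 'usedFor', 'drinking')]
```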
“…There is a growing need for robots and other intelligent agents to interact safely with partners, mainly human beings. In this regard, perceptual semantics formulated using affordance learning and object grounding is vital for human-robot interaction (HRI) [14][15][16].…”
Section: Related Work
confidence: 99%
“…Therefore, in this study, we focused on both visual and linguistic cues. In addition to visual and linguistic cues, Breux et al. [16] considered ontologies based on WordNet to extract action cues and ground the relationships between objects and features (properties). This improved the results and HRI but covered only seven types of relationships (isA, hasA, prop, usedFor, on, linked-to, and homonym), which limits the agent's recognition and understanding capabilities to the stated semantic associations.…”
Section: Related Work
confidence: 99%
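The seven relation types listed in this citation statement can be pictured as a small typed triple store. In the sketch below, only the relation names come from the quote; the storage and query scheme is invented for demonstration.

```python
# Minimal triple store over the seven relation types named in the citation;
# the store itself is an assumption, not the paper's data structure.
from enum import Enum

class Relation(Enum):
    IS_A = "isA"             # taxonomic parent, e.g. (cup, isA, container)
    HAS_A = "hasA"           # part-whole, e.g. (cup, hasA, handle)
    PROP = "prop"            # property, e.g. (cup, prop, graspable)
    USED_FOR = "usedFor"     # affordance, e.g. (cup, usedFor, drinking)
    ON = "on"                # spatial support, e.g. (cup, on, table)
    LINKED_TO = "linked-to"  # generic association between concepts
    HOMONYM = "homonym"      # same word, different concept

class OntologyStore:
    """Stores and queries (subject, relation, object) triples."""
    def __init__(self):
        self.triples = set()

    def add(self, subject: str, relation: Relation, obj: str):
        self.triples.add((subject, relation, obj))

    def query(self, subject: str, relation: Relation) -> list:
        return [o for s, r, o in self.triples if s == subject and r == relation]

# Usage: assert a few facts about "cup" and retrieve its affordances.
store = OntologyStore()
store.add("cup", Relation.IS_A, "container")
store.add("cup", Relation.USED_FOR, "drinking")
store.add("cup", Relation.ON, "table")
print(store.query("cup", Relation.USED_FOR))  # ['drinking']
```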