2018
DOI: 10.3389/fnbot.2018.00011
Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots

Abstract: In this paper, we propose a hierarchical spatial concept formation method based on a Bayesian generative model with multimodal information, e.g., vision, position, and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and to describe their position linguistically, e.g., “I am in my home” and “I am in front of the table,” a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with use…
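The two-level idea in the abstract (fine-grained places grouped into coarser regions, each describable at its own level of abstraction) can be caricatured with a small numpy sketch. The synthetic positions, the plain k-means routine, and the two clustering levels below are illustrative assumptions, not the paper's actual Bayesian generative model.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means; returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return labels, C

# Synthetic 2-D robot positions scattered around four "places".
rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [0.0, 5.0], [10.0, 0.0], [10.0, 5.0]])
X = np.concatenate([c + 0.3 * rng.standard_normal((25, 2)) for c in centers])

# Level 1: fine-grained places ("in front of the table").
fine_labels, fine_C = kmeans(X, k=4)
# Level 2: coarser regions ("in my home"), grouping the place centroids.
coarse_of_place, _ = kmeans(fine_C, k=2)
coarse_labels = coarse_of_place[fine_labels]
```

Each position thus carries a label at both levels, so a description can be given at whichever level of abstraction the situation calls for.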

Cited by 20 publications (15 citation statements). References 30 publications.
“…Several methods that enable robots to acquire words and spatial concepts based on sensory-motor information in an unsupervised manner have been proposed (Taniguchi A. et al, 2016; Isobe et al, 2017). Hagiwara et al (2016, 2018) proposed a Bayesian model to acquire the hierarchical structure of spatial concepts based on the sensory-motor information of a robot in real home environments. Taniguchi et al (2019) summarized their studies and related work on cognitive developmental robotics, in which robots learn a language through interaction with their environment, and on unsupervised learning methods that enable robots to learn a language without hand-crafted training data.…”
Section: Related Work
confidence: 99%
“…Nakamura et al (2009) proposed a model in which a robot acquires object concepts using a multimodal latent Dirichlet allocation (mLDA) over image, sound, haptic, and language information obtained through human linguistic instructions. The authors also proposed a spatial concept acquisition model consisting of position, image, and language information (Hagiwara et al, 2016, 2018). These studies model concept acquisition based on multimodal information, including language information, within an individual agent.…”
Section: Proposed Model and Inference Algorithm
confidence: 99%
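The multimodal categorization idea in the citation above — grouping observations whose evidence spans several modalities — can be sketched by clustering fused feature vectors. mLDA itself is a Bayesian topic model, so this numpy stand-in (with invented toy modalities and a plain k-means step) only mimics the grouping behavior; it is not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_obs(vis_mean, hap_mean, word_id, n=20):
    """Toy observation: 4-D 'visual' + 2-D 'haptic' features + a word one-hot."""
    vis = vis_mean + 0.1 * rng.standard_normal((n, 4))
    hap = hap_mean + 0.1 * rng.standard_normal((n, 2))
    word = np.zeros((n, 2))
    word[:, word_id] = 1.0
    return np.hstack([vis, hap, word])

# Two object "concepts", each observed 20 times across all modalities.
X = np.vstack([
    make_obs(np.array([1.0, 0, 0, 0]), np.array([1.0, 0]), word_id=0),
    make_obs(np.array([0, 0, 1.0, 0]), np.array([0, 1.0]), word_id=1),
])

# Naive concept formation: cluster the fused multimodal features.
k = 2
C = X[rng.choice(len(X), k, replace=False)]
for _ in range(30):
    labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
    C = np.array([X[labels == j].mean(0) if np.any(labels == j) else C[j]
                  for j in range(k)])
```

Because the word one-hot is concatenated with the perceptual features, the word channel participates in clustering exactly like any other modality, which is the intuition behind treating language as "just another modality" in these models.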
“…A further advancement of such cognitive systems allows robots to find the meanings of words by treating linguistic input as another modality [13][14][15]. Cognitive models have recently become more complex in realizing various cognitive capabilities: grammar acquisition [16], language model learning [17], hierarchical concept acquisition [18,19], spatial concept acquisition [20], motion skill acquisition [21], and task planning [7] (see Fig. 1).…”
Section: Introduction
confidence: 99%
“…Isobe et al (2017) proposed a learning method that derives the relationship between objects and places using image features obtained by a Convolutional Neural Network (CNN) (Krizhevsky et al, 2012). Hagiwara et al (2018) implemented a hierarchical clustering method for forming hierarchical place concepts. However, none of the above methods can sequentially learn spatial concepts in unknown environments without a map, because they rely on batch-learning algorithms.…”
Section: Introduction
confidence: 99%
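The object–place relationship mentioned in the last citation can be illustrated with a minimal co-occurrence sketch. Here the object labels are assumed to be given directly; in the cited work they would come from CNN image features, which this snippet does not compute, and the place and object names are invented for illustration.

```python
import numpy as np

# Toy log of (place, object) sightings collected while the robot moves around.
sightings = [
    ("kitchen", "cup"), ("kitchen", "cup"), ("kitchen", "kettle"),
    ("desk", "laptop"), ("desk", "cup"), ("desk", "laptop"),
]

places = sorted({p for p, _ in sightings})
objects = sorted({o for _, o in sightings})
counts = np.zeros((len(places), len(objects)))
for p, o in sightings:
    counts[places.index(p), objects.index(o)] += 1

# P(object | place) with add-one smoothing, plus the most typical object.
probs = (counts + 1) / (counts + 1).sum(axis=1, keepdims=True)
typical = {p: objects[int(np.argmax(counts[i]))] for i, p in enumerate(places)}
```

Counting co-occurrences like this is a batch computation over the whole log, which hints at why the quoted passage singles out batch learning as the obstacle to sequential learning in unknown environments.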