2016
DOI: 10.1109/tamd.2015.2476374

Learning Context on a Humanoid Robot using Incremental Latent Dirichlet Allocation

Abstract: In this article, we formalize and model context in terms of a set of concepts grounded in the sensorimotor interactions of a robot. The concepts are modeled as a web using a Markov Random Field, inspired by the concept web hypothesis for representing concepts in humans. On this concept web, we treat context as a latent variable of Latent Dirichlet Allocation (LDA), a widely used method in computational linguistics for modeling topics in texts. We extend the standard LDA method in order to mak…
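To make the core idea concrete, here is a minimal sketch of treating contexts as the latent topics of an LDA model over concepts: each scene is a "document" whose "words" are the concepts activated by the robot's sensorimotor interactions. This is not the paper's incremental extension, just the standard LDA setup it builds on; the gensim library, the concept names, and the toy data are all illustrative assumptions.

```python
from gensim import corpora, models

# Hypothetical concept activations from sensorimotor interactions:
# each scene is a bag of grounded concepts (illustrative only).
scenes = [
    ["cup", "table", "graspable", "hard"],
    ["ball", "floor", "rollable", "soft"],
    ["cup", "spoon", "table", "container"],
]

dictionary = corpora.Dictionary(scenes)
corpus = [dictionary.doc2bow(scene) for scene in scenes]

# Two latent "contexts" (topics); the paper's extension instead adapts
# the number of contexts incrementally as new scenes arrive.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)

# Infer the context distribution of a new scene.
new_scene = dictionary.doc2bow(["cup", "table"])
print(lda.get_document_topics(new_scene))
```

The output is a distribution over latent contexts for the new scene, which is the quantity the concept web can then condition on.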


Cited by 18 publications (32 citation statements). References 80 publications.
“…In our experiments, we compare iRBM and diBM against (vanilla) RBM (with the same number of hidden units that was found by iRBM), stacked RBM (with the same number of layers and hidden units as found by siRBM), DBM (with the same number of layers and hidden units as found by diBM), incremental RBM [18], and incremental LDA [6]. In comparing the methods, we use the same number of epochs for each method.…”
Section: Experiments and Results (mentioning)
confidence: 99%
“…To get a feeling of the performance of the methods, we looked at the strongest objects associated with the hidden neurons. For this, we just compared one-layer methods (iRBM, incremental RBM [18], incremental LDA [6] and online vanilla RBM) and hence, not considered DBM, diBM, stacked RBM or stacked iRBM, since the first layers of these methods (RBM and iRBM) are included in the comparison. Table I lists the three best (selected by visual inspection) hidden neurons' strongly connected objects for the different methods.…”
Section: Qualitative Inspection of Context Coherence (Hidden Nodes) (mentioning)
confidence: 99%
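The inspection described in that statement amounts to ranking, for each hidden unit, the visible "object" units by connection weight. A hedged sketch of that procedure, assuming a trained RBM weight matrix W of shape (objects, hidden units); the random W and the object names here are stand-in assumptions, not the cited models' actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
object_names = ["cup", "ball", "spoon", "table", "book", "plate"]
W = rng.normal(size=(len(object_names), 4))  # stands in for a trained weight matrix

for h in range(W.shape[1]):
    # Three visible units most strongly (positively) connected to hidden unit h.
    top = np.argsort(W[:, h])[::-1][:3]
    print(f"hidden unit {h}:", [object_names[i] for i in top])
```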
“…First, ILSA targets static datasets; dynamic datasets are not considered. Accordingly, we will study how to build a dynamic partial index for a dynamic term set and document set by integrating existing incremental LSA algorithms [61, 62] and incremental SVD algorithms [63–65]. Second, our approach does not pay attention to the transformation of a query into a pseudo document, which involves many unnecessary operations on low-valued entries and increases the execution time of the online query process.…”
Section: Discussion (mentioning)
confidence: 99%
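One simple ingredient of the incremental LSA/SVD direction mentioned in that statement is the classic "fold-in" step, which projects a new document into an existing truncated SVD space without recomputing the factorization. A minimal sketch, with an entirely illustrative term-document matrix and new document:

```python
import numpy as np

A = np.random.default_rng(1).random((6, 4))       # term-document matrix (6 terms, 4 docs)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                              # rank of the LSA space
Uk, sk = U[:, :k], s[:k]

d_new = np.array([1.0, 0.0, 2.0, 0.0, 1.0, 0.0])  # raw term counts of a new document
d_lsa = d_new @ Uk / sk                            # fold-in: project into the k-dim LSA space
print(d_lsa)
```

Fold-in keeps queries fast but lets the basis go stale, which is exactly why the quoted authors point to true incremental SVD updates instead.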
“…Among these studies, similar to ours, there are also models that explicitly integrate context into a scene model [14,15,17]. For example, Wang et al [14] extend LDA to incorporate relative positions between pixels in a local neighborhood in order to segment an image into semantically meaningful regions.…”
Section: Study (mentioning)
confidence: 99%