2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)
DOI: 10.1109/devlrn.2013.6652560
A generative probabilistic framework for learning spatial language

Abstract: The language of space and spatial relations is a rich source of abstract semantic structure. We develop a probabilistic model that learns to understand utterances that describe spatial configurations of objects in a tabletop scene by seeking the meaning that best explains the sentence chosen. The inference problem is simplified by assuming that sentences express symbolic representations of (latent) semantic relations between referents and landmarks in space, and that given these symbolic representations, utter…
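The inference idea described in the abstract — pick the latent spatial relation that best explains the observed sentence — can be sketched as a toy maximum-a-posteriori computation. This is an illustrative sketch only: the relation names, example sentences, and probability tables below are invented for demonstration and are not taken from the paper.

```python
# Toy sketch of MAP inference over latent spatial relations:
# choose r maximizing P(sentence | r) * P(r).
# All names and numbers are hypothetical, not from the paper.

PRIOR = {"left_of": 0.4, "right_of": 0.4, "above": 0.2}

# P(sentence | relation): a made-up likelihood table.
LIKELIHOOD = {
    "left_of":  {"the cup is left of the plate": 0.7,
                 "the cup is beside the plate": 0.3},
    "right_of": {"the cup is right of the plate": 0.7,
                 "the cup is beside the plate": 0.3},
    "above":    {"the cup is above the plate": 1.0},
}

def infer_relation(sentence):
    """Return the MAP latent relation for a sentence."""
    scores = {r: PRIOR[r] * LIKELIHOOD[r].get(sentence, 0.0) for r in PRIOR}
    return max(scores, key=scores.get)

print(infer_relation("the cup is left of the plate"))  # -> left_of
```

The paper's actual model grounds these relations in continuous scene geometry; this sketch only shows the symbolic Bayesian selection step the abstract describes.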

Cited by 15 publications (21 citation statements)
References 14 publications
“…Matuszek et al [25] introduced a probabilistic framework that employs categorial grammar to develop compositional representations of language and objects in the environment. Tellex et al [42] and Dawson et al [12] proposed probabilistic frameworks for grounding verbs and prepositions in utterances that encode spatial relationships between referents and landmarks. Siskind [35] developed a model for grounding semantics of verbs in short image sequences.…”
Section: Related Work
confidence: 99%
“…Noun Phrase (NP) = Determiner + Noun (N) 11. It measures homogeneity (i.e., optimal case: each cluster (separate word category) contains fewer classes of tags) and completeness (i.e., optimal case: classes of tags referring to the same cluster are equal) of clusters and classes [30] 12. It measures the variation of information of a clustering solution, so that the more the clustering is complete (i.e., high V-Measure), the lower the VI-Measure would be [26].…”
confidence: 99%
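The homogeneity/completeness trade-off mentioned in the quoted footnotes can be made concrete with a small entropy-based computation following the standard V-Measure definitions. The tag classes and cluster assignments below are made-up examples for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (natural log) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def conditional_entropy(labels, given):
    """H(labels | given): entropy averaged over the partition induced by `given`."""
    n = len(labels)
    h = 0.0
    for g in set(given):
        subset = [l for l, gv in zip(labels, given) if gv == g]
        h += (len(subset) / n) * entropy(subset)
    return h

def v_measure(classes, clusters):
    """Return (homogeneity, completeness, V-measure) for a clustering."""
    h_c, h_k = entropy(classes), entropy(clusters)
    homogeneity = 1.0 if h_c == 0 else 1.0 - conditional_entropy(classes, clusters) / h_c
    completeness = 1.0 if h_k == 0 else 1.0 - conditional_entropy(clusters, classes) / h_k
    v = 0.0 if homogeneity + completeness == 0 else \
        2 * homogeneity * completeness / (homogeneity + completeness)
    return homogeneity, completeness, v

# Perfect clustering: each cluster contains exactly one tag class.
print(v_measure(["N", "N", "DET", "DET"], [0, 0, 1, 1]))  # -> (1.0, 1.0, 1.0)
```

Collapsing everything into a single cluster keeps completeness at 1 but drives homogeneity (and hence the V-Measure) to 0, which is the trade-off the footnotes describe.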
“…In several previous studies, probabilistic models have been used for language grounding [1,10,41]. However, to the best of our knowledge, none of them included unknown synonyms and they differed in their approaches, experimental setups, or corpora from the current study, which makes the comparison of results between our study and these studies, among many others in the literature, difficult to attain.…”
Section: Results
confidence: 92%
“…Four different word representations have been investigated (Section III-E). The obtained F1-scores show that combining syntactic-semantic vectors and POS tags achieves the best overall grounding performance with and without the article the (Figure 8). For the Word Vector + POS Tags and Word Vector representations the model did not learn the Others modality (the article the), which might be due to the inter- and intra-modality distances in the employed vector space (Table III).…”
Section: Results
confidence: 95%