2009 International Joint Conference on Neural Networks
DOI: 10.1109/ijcnn.2009.5178788

A cross-situational algorithm for learning a lexicon using Neural modeling fields

Abstract: Cross-situational learning is based on the idea that a learner can determine the meaning of a word by finding something in common across all observed uses of that word. Although cross-situational learning is usually modeled through stochastic guessing games in which the input data vary erratically with time (or rounds of the game), here we investigate the possibility of applying the deterministic Neural Modeling Fields (NMF) categorization mechanism to infer the correct object-word mapping. Two different repre…
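The following is a minimal sketch of the cross-situational principle stated in the abstract: a word's meaning is whatever remains common across every situation in which the word was heard. It is not the paper's deterministic NMF categorization mechanism, only an intersection-based toy in Python; the function name cross_situational_learn, the data layout, and the example vocabulary are all illustrative assumptions.

```python
# Toy illustration of the cross-situational principle described in the
# abstract: a word's candidate referents are intersected across all
# situations in which the word occurs. This is NOT the paper's NMF
# mechanism; every name and the data layout below are illustrative.

def cross_situational_learn(situations):
    """situations: iterable of (words, objects) pairs, where each pair holds
    the set of words uttered and the set of objects present in one situation.
    Returns a dict mapping each word to its surviving candidate referents."""
    candidates = {}
    for words, objects in situations:
        for word in words:
            if word not in candidates:
                # First exposure: every co-present object is a candidate.
                candidates[word] = set(objects)
            else:
                # Later exposures: keep only referents seen every single time.
                candidates[word] &= set(objects)
    return candidates


if __name__ == "__main__":
    situations = [
        ({"ball", "dog"},  {"BALL", "DOG", "TREE"}),
        ({"ball"},         {"BALL", "CAT"}),
        ({"dog", "tree"},  {"DOG", "TREE"}),
        ({"dog"},          {"DOG", "CAT"}),
    ]
    for word, referents in sorted(cross_situational_learn(situations).items()):
        print(word, "->", referents)
    # ball -> {'BALL'}, dog -> {'DOG'}; "tree" stays ambiguous after one exposure.
```

In the stochastic guessing-game setting mentioned in the abstract, this kind of pruning only emerges statistically over noisy rounds, whereas the paper pursues the mapping through NMF's deterministic categorization mechanism.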

Cited by 10 publications (4 citation statements)
References 43 publications
“…Therefore, usually it is not obvious which label-word refers to which situation. This case of partial supervision is sometimes modeled as cross-situational learning (Fontanari et al., 2009). In future we will address similar learning of situations in parallel with proper associations among word-labels and situations.…”
Section: Discussion
confidence: 99%
“…a word, through perceptual information [18]. Previous studies that investigated the use of cross-situational learning for grounding of objects [13,40] as well as spatial concepts [2,10,41] ensured that one word appears several times together with the same perceptual feature vector so that a corresponding mapping can be created [14]. However, natural language is ambiguous due to homonymy, i.e.…”
Section: A. Grounding
confidence: 99%
“…To ground manipulation actions in an unsupervised manner, i.e. without the need for a tutor, CSL (Section 2.2) can be used, which assumes that one word appears several times together with the same perceptual feature vector so that a corresponding mapping can be created (Siskind, 1996; Fontanari et al., 2009b; Smith et al., 2011). Previous studies investigated the use of CSL for grounding of objects and actions (Fontanari et al., 2009a; Taniguchi et al., 2017) as well as spatial concepts (Tellex et al., 2011; Dawson et al., 2013; Aly et al., 2017).…”
Section: Grounding
confidence: 99%
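The citation statements above describe a common reading of CSL in grounding work: a word is assumed to co-occur repeatedly with roughly the same perceptual feature vector, and the mapping is recovered from those co-occurrences. The sketch below is a toy counting version of that reading, not the method of any cited paper; nearest_prototype, ground_words, and the distance tolerance tol are hypothetical.

```python
# Toy co-occurrence version of the CSL assumption quoted above: feature
# vectors are grouped into prototypes by a simple distance threshold and
# each word is mapped to the prototype it co-occurs with most often.
# Not the method of any cited paper; all names and thresholds are illustrative.

import math
from collections import defaultdict


def nearest_prototype(vec, prototypes, tol=0.1):
    """Return the index of a stored prototype close to vec, adding one if needed."""
    for i, proto in enumerate(prototypes):
        if math.dist(vec, proto) < tol:
            return i
    prototypes.append(list(vec))
    return len(prototypes) - 1


def ground_words(observations):
    """observations: iterable of (word, feature_vector) pairs.
    Returns (word -> index of most frequent co-occurring prototype, prototypes)."""
    prototypes = []
    counts = defaultdict(lambda: defaultdict(int))
    for word, vec in observations:
        counts[word][nearest_prototype(vec, prototypes)] += 1
    mapping = {word: max(c, key=c.get) for word, c in counts.items()}
    return mapping, prototypes


if __name__ == "__main__":
    obs = [("cup",   (1.00, 0.00)),   # same object perceived twice
           ("cup",   (0.98, 0.02)),
           ("cup",   (0.00, 1.00)),   # one spurious co-occurrence
           ("plate", (0.00, 1.00))]
    mapping, protos = ground_words(obs)
    print(mapping)   # {'cup': 0, 'plate': 1}: majority co-occurrence wins
```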