2010
DOI: 10.1613/jair.2962
Training a Multilingual Sportscaster: Using Perceptual Context to Learn Language

Abstract: We present a novel framework for learning to interpret and generate language using only perceptual context as supervision. We demonstrate its capabilities by developing a system that learns to sportscast simulated robot soccer games in both English and Korean without any language-specific prior knowledge. Training employs only ambiguous supervision consisting of a stream of descriptive textual comments and a sequence of events extracted from the simulation trace. The system simultaneously establishes correspon…
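The training regime described in the abstract pairs each comment with several candidate events and lets the learner work out which event the comment actually describes. The following is a minimal toy sketch of that idea, not the paper's actual system: the data, event names, and the simple count-based scoring are all illustrative assumptions, loosely in the spirit of iterative comment-event alignment.

```python
# Toy sketch (NOT the paper's actual system) of learning from ambiguous
# supervision: each comment is paired with SEVERAL candidate events, and
# we iteratively learn which single event each comment describes.
from collections import defaultdict

# Hypothetical training data: (comment tokens, candidate event types).
DATA = [
    (["purple7", "passes", "to", "purple4"], ["pass", "kick"]),
    (["purple4", "kicks", "the", "ball"],    ["kick"]),
    (["pink3", "passes", "back"],            ["pass", "turnover"]),
]

def learn_alignments(data, iters=10):
    # Start with uniform word-given-event scores.
    score = defaultdict(lambda: 1.0)
    for _ in range(iters):
        counts = defaultdict(float)
        totals = defaultdict(float)
        for words, events in data:
            # Choose the candidate event whose current word scores
            # best explain the comment (an EM-style hard assignment).
            best = max(events, key=lambda e: sum(score[(w, e)] for w in words))
            for w in words:
                counts[(w, best)] += 1.0
                totals[best] += 1.0
        # Re-estimate word-event scores from the chosen alignments,
        # with a small default for unseen pairs.
        score = defaultdict(lambda: 0.1)
        for (w, e), c in counts.items():
            score[(w, e)] = c / totals[e]
    # Final disambiguation: one event per comment.
    return [max(events, key=lambda e: sum(score[(w, e)] for w in words))
            for words, events in data]

print(learn_alignments(DATA))  # → ['pass', 'kick', 'pass']
```

Even in this tiny example, the unambiguous second pair ("kicks" can only be a kick) anchors the scores, which then disambiguate the comments that had multiple candidate events.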

Cited by 64 publications (70 citation statements)
References 48 publications
“…a) Natural Language Generation: Natural language generation from time-series data has been investigated for various tasks, including, but not limited to, weather forecast generation [9], [10], [11], sportscasting [12], [13], [14], narrative generation to assist children with communication needs [15], student feedback generation [16], report generation from student data [17], and report generation from medical time-series data [18], [19], [20], [4].…”
Section: Related Work
confidence: 99%
“…In seminal work on grounded language learning, Siskind [19] focused on learning word meaning, but not grammatical structure. In a series of work, Mooney and colleagues [9,4,3,11] learn to match phrases to elements of a context. Their goals are similar to ours, but an essential difference is that they only map a phrase to a single element in the context; that is, the meaning of the phrase must be a single element of the context.…”
Section: Background and Related Work
confidence: 99%
“…Their goals are similar to ours, but an essential difference is that they only map a phrase to a single element in the context; that is, the meaning of the phrase must be a single element of the context. Chen et al [3] acknowledge this limitation, and mention inductive logic programming (ILP) as a possible approach to learning more complex meanings, to be explored in future work. This work partially fills that gap.…”
Section: Background and Related Work
confidence: 99%
“…Thus, some work has been done to integrate semantic information into the language learning process. The work of Chen et al. [2] is one example of this way of learning language models. Nevertheless, in this approach the meaning of each sentence has to be provided for each example of the training set.…”
Section: Introduction
confidence: 99%