Proceedings of the Eighteenth Conference on Computational Natural Language Learning 2014
DOI: 10.3115/v1/w14-1606
Inducing Neural Models of Script Knowledge

Abstract: Induction of common sense knowledge about prototypical sequences of events has recently received much attention (e.g., Chambers and Jurafsky (2008); Regneri et al. (2010)). Instead of inducing this knowledge in the form of graphs, as in much of the previous work, our method computes distributed representations of event realizations from distributed representations of predicates and their arguments, and then uses these representations to predict prototypical event orderings. The parameters of the …
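The abstract describes composing an event representation from predicate and argument embeddings, then using it to predict prototypical event orderings. The following is a minimal sketch of that idea only; the vocabulary, dimensions, composition function, and linear ranking scorer are all illustrative assumptions, not the paper's actual architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # embedding dimensionality (hypothetical)

# Toy vocabulary of predicates and arguments with random embeddings.
vocab = {w: rng.normal(size=DIM) for w in
         ["enter", "order", "eat", "pay", "leave", "customer", "food", "bill"]}

# Composition matrices: project predicate and argument embeddings into
# a shared event space, then apply a nonlinearity.
W_pred = rng.normal(size=(DIM, DIM)) * 0.1
W_arg = rng.normal(size=(DIM, DIM)) * 0.1

def event_embedding(predicate, args):
    """Compose a distributed event representation from the embeddings
    of its predicate and arguments (tanh of summed linear projections)."""
    h = W_pred @ vocab[predicate]
    for a in args:
        h = h + W_arg @ vocab[a]
    return np.tanh(h)

# A linear scorer maps each event embedding to a scalar; sorting events
# by this score yields a predicted prototypical ordering. In a real
# system these parameters would be trained on observed event sequences.
w_rank = rng.normal(size=DIM)

events = [("pay", ["customer", "bill"]),
          ("enter", ["customer"]),
          ("eat", ["customer", "food"])]
scores = {pred: float(w_rank @ event_embedding(pred, args))
          for pred, args in events}
predicted_order = sorted(scores, key=scores.get)
print(predicted_order)  # untrained weights, so this order is arbitrary
```

With trained composition and ranking parameters, the scorer would place "enter" before "eat" and "eat" before "pay" for a restaurant script.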

Cited by 56 publications (66 citation statements); references 10 publications.
“…Several statistical methods have been proposed to automatically learn scripts or script-like structures from unstructured text (Chambers and Jurafsky, 2008, 2009; Jans et al., 2012; Orr et al., 2014; Pichotta and Mooney, 2014). Such methods for script learning also include Bayesian approaches (Bejan, 2008; Frermann et al., 2014), sequence alignment algorithms (Regneri et al., 2010) and neural networks (Modi and Titov, 2014; Granroth-Wilding and Clark, 2016; Pichotta and Mooney, 2016). There has also been work on representing events in a structured manner using schemas, which are learned probabilistically (Chambers, 2013; Cheung et al., 2013; Nguyen et al., 2015), using graphs (Balasubramanian et al., 2013) or neural approaches (Titov and Khoddam, 2015).…”
Section: Events-centered Learning (mentioning)
confidence: 99%
“…Further improvements include incorporating more information and more complicated models (Radinsky and Horvitz, 2013; Modi and Titov, 2014; Ahrendt and Demberg, 2016). Recent research has tried to solve the event prediction problem by transforming it into a language modeling paradigm (Pichotta and Mooney, 2014; Rudinger et al., 2015; Hu et al., 2017).…”
Section: Related Work (mentioning)
confidence: 99%
“…In recent years, many methods have been proposed for commonsense machine comprehension. However, these methods mostly either focus on matching explicit information in given texts (Weston et al., 2014; Wang and Jiang, 2016a,b; Zhao et al., 2017), or pay attention to one specific kind of commonsense knowledge, such as event temporal relations (Chambers and Jurafsky, 2008; Modi and Titov, 2014; Pichotta and Mooney, 2016b; Hu et al., 2017) and event causality (Do et al., 2011; Radinsky et al., 2012; Hashimoto et al., 2015; Gui et al., 2016). As discussed above, it is clear that the commonsense machine comprehension problem is far from settled when only explicit information or a single kind of commonsense knowledge is considered.…”
Section: Introduction (mentioning)
confidence: 99%
“…We also advised participants to make use of other representations of script knowledge, such as narrative chains (Chambers and Jurafsky, 2008) or event embeddings (Modi and Titov, 2014).…”
Section: Script and Commonsense Knowledge Data (mentioning)
confidence: 99%
“…In the past, script modeling systems have been evaluated using intrinsic tasks such as event ordering (Modi and Titov, 2014), paraphrasing (Regneri et al., 2010; Wanzare et al., 2017), event prediction (namely, the narrative cloze task; Chambers and Jurafsky, 2008, 2009; Rudinger et al., 2015b; Modi, 2016) or story completion (e.g. the story cloze task: “It was a long day at work and I decided to stop at the gym before going home.…”
Section: Introduction (mentioning)
confidence: 99%
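The narrative cloze task cited above evaluates a script model by holding out one event from an observed chain and checking how highly the model ranks it among candidates. A toy sketch of the evaluation loop, using made-up event chains and simple bigram counts in place of a trained script model:

```python
from collections import Counter

# Toy "training" chains of restaurant-script events (invented data).
chains = [["enter", "order", "eat", "pay", "leave"],
          ["enter", "order", "pay", "leave"],
          ["enter", "eat", "pay", "leave"]]

# Count adjacent event pairs as a stand-in for a learned event model.
bigrams = Counter()
for chain in chains:
    for a, b in zip(chain, chain[1:]):
        bigrams[(a, b)] += 1

def cloze_rank(context, candidates):
    """Rank candidate events by how often each follows the context's
    last event; the held-out event should rank near the top."""
    last = context[-1]
    return sorted(candidates, key=lambda c: -bigrams[(last, c)])

# Hold out "eat" from a chain and see where the model ranks it.
context = ["enter", "order"]
ranking = cloze_rank(context, ["leave", "eat", "pay"])
print(ranking[0])  # → eat
```

Real narrative cloze evaluations average the held-out event's rank (or recall at k) over a large test corpus of extracted event chains.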