Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics 2014
DOI: 10.3115/v1/e14-1006
A Hierarchical Bayesian Model for Unsupervised Induction of Script Knowledge

Abstract: Scripts representing common sense knowledge about stereotyped sequences of events have been shown to be a valuable resource for NLP applications. We present a hierarchical Bayesian model for unsupervised learning of script knowledge from crowdsourced descriptions of human activities. Events and constraints on event ordering are induced jointly in one unified framework. We use a statistical model over permutations which captures event ordering constraints in a more flexible way than previous approaches. In orde…

Cited by 41 publications (50 citation statements)
References 20 publications
“…In this paper, we take the position that the narrative cloze test, which has been treated predominantly as a method for evaluating script knowledge, is more productively thought of simply as a language modeling task. (Footnote: A number of related works on script induction use alternative task formulations and evaluations: Chambers, 2013; Cheung and Penn, 2013; Frermann et al., 2014; Manshadi et al., 2008; Modi and Titov, 2014; Regneri et al., 2010.) To support this claim, we demonstrate a marked improvement over previous methods on this task using a powerful discriminative language model, the Log-Bilinear model (LBL).…”
Section: Introduction (mentioning)
confidence: 99%
“…MSA is the system of Regneri et al. (2010). BS is a hierarchical Bayesian model by Frermann et al. (2014). BL chooses the order of events based on the preferred order of the corresponding verbs in the training set: (e1, e2) is predicted to be in the stereotypical order if the number of times the corresponding verbs v1 and v2 appear in this order in the training ESDs exceeds the number of times they appear in the opposite order (not necessarily at adjacent positions); a coin is tossed to break ties (or if v1 and v2 are the same verb).…”
Section: Results (mentioning)
confidence: 99%
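The BL baseline quoted above can be sketched in a few lines: count, over the training event sequence descriptions (ESDs), how often verb v1 precedes verb v2 at any distance, then predict the more frequent direction and toss a coin on ties. This is a minimal illustrative sketch, not the evaluated system; the function names and data layout (each ESD as a list of verbs) are assumptions.

```python
import random
from collections import Counter

def train_verb_order_counts(esds):
    """Count how often one verb precedes another (not necessarily at
    adjacent positions) across the training ESDs.

    esds: list of ESDs, each a list of verbs in their described order.
    (Hypothetical data layout for illustration.)
    """
    counts = Counter()
    for verbs in esds:
        for i in range(len(verbs)):
            for j in range(i + 1, len(verbs)):
                counts[(verbs[i], verbs[j])] += 1
    return counts

def predict_order(counts, v1, v2, rng=random):
    """Return True if (v1, v2) is predicted as the stereotypical order."""
    before = counts[(v1, v2)]  # times v1 preceded v2 in training
    after = counts[(v2, v1)]   # times v2 preceded v1 in training
    if before > after:
        return True
    if after > before:
        return False
    # Coin toss to break ties (also covers v1 == v2).
    return rng.random() < 0.5
```

For example, trained on ESDs for a "getting ready" scenario where "wake" always precedes "dress", the baseline predicts ("wake", "dress") as the stereotypical order.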
“…Sections 3-4 of this paper focus on a subset of ESDs for 14 scenarios from SMILE and OMICS, with on average 29.9 ESDs per scenario. In RKP, in the follow-up studies by Frermann et al (2014) and Modi and Titov (2014) as well as in the present study, 4 of these scenarios were used as development set and 10 as test set.…”
Section: Data (mentioning)
confidence: 99%