Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2017
DOI: 10.18653/v1/p17-2097

Pay Attention to the Ending: Strong Neural Baselines for the ROC Story Cloze Task

Abstract: We consider the ROC story cloze task (Mostafazadeh et al., 2016) and present several findings. We develop a model that uses hierarchical recurrent networks with attention to encode the sentences in the story and score candidate endings. By discarding the large training set and only training on the validation set, we achieve an accuracy of 74.7%. Even when we discard the story plots (sentences before the ending) and only train to choose the better of two endings, we can still reach 72.5%. We then analyze this …
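The ending-only result described in the abstract admits a compact implementation. Below is a minimal PyTorch sketch, not the authors' released code, of an ending-only scorer in the spirit of the paper: a bidirectional LSTM with word-level attention encodes a candidate ending, and a linear layer produces a plausibility score. All dimensions, names, and the hinge-loss training objective here are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of an ending-only scorer:
# encode each candidate ending with a BiLSTM, pool with learned
# attention, and score it with a linear layer.
import torch
import torch.nn as nn

class EndingScorer(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)   # word-level attention
        self.score = nn.Linear(2 * hidden_dim, 1)  # scalar plausibility

    def forward(self, ending_ids):
        # ending_ids: (batch, seq_len) token ids of one candidate ending
        h, _ = self.encoder(self.embed(ending_ids))   # (B, T, 2H)
        weights = torch.softmax(self.attn(h), dim=1)  # (B, T, 1)
        pooled = (weights * h).sum(dim=1)             # (B, 2H)
        return self.score(pooled).squeeze(-1)         # (B,)

# Training treats the task as choosing the better of two endings:
# push the correct ending's score above the incorrect one's.
model = EndingScorer(vocab_size=10000)
good = model(torch.randint(0, 10000, (4, 12)))  # correct endings (toy ids)
bad = model(torch.randint(0, 10000, (4, 12)))   # incorrect endings
loss = torch.clamp(1.0 - good + bad, min=0).mean()  # hinge-loss sketch
```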

Cited by 56 publications (65 citation statements) | References 17 publications

“…In ROC Stories (Mostafazadeh et al., 2016), a story cloze dataset, Schwartz et al. (2017b) obtained high performance by considering only the candidate endings, without even looking at the story context. In this case, stylistic features of the candidate endings alone, such as their length or certain words, were strong indicators of the correct ending (Schwartz et al., 2017a; Cai et al., 2017). A similar phenomenon was observed in reading comprehension, where systems performed non-trivially well by using only the final sentence in the passage or by ignoring the passage altogether (Kaushik & Lipton, 2018).…”
Section: Fine-tuning on Target Datasets
confidence: 52%
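To make the stylistic-features observation concrete, here is a hedged sketch, not Schwartz et al.'s actual system: each candidate ending is featurized by its bag of words plus a length feature, and a plain logistic regression is fit to predict whether the ending is correct, without ever seeing the story. The feature set and the toy data are illustrative assumptions.

```python
# Hedged illustration of an ending-only stylistic classifier:
# bag-of-words plus ending length, no story context at all.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack, csr_matrix

endings = ["She was thrilled with the result.",
           "The dog ate the entire couch immediately."]
labels = [1, 0]  # 1 = correct ending, 0 = incorrect (toy data)

words = CountVectorizer().fit_transform(endings)           # word features
lengths = csr_matrix([[len(e.split())] for e in endings])  # length feature
features = hstack([words, lengths])

clf = LogisticRegression().fit(features, labels)
print(clf.predict(features))
```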
“…A: I was getting tired of having food in the house. …tion of the problem (Gururangan et al., 2018; Cai et al., 2017). Therefore, we report a problem ablation study in Table 4 using BERT-FT as a simple but powerful straw-man approach.…”
Section: Results and Analysis
confidence: 99%
“…To improve performance, features such as topic words and sentiment scores are also extracted and incorporated (Chaturvedi, Peng, and Roth 2017). Neural network models have also been applied to this task (e.g., Huang et al. 2013; Cai, Tu, and Gimpel 2017), using LSTMs to encode different parts of the story and calculate their similarities. In addition, Li et al. (2018) introduce event frames into their model and leverage five different embeddings.…”
Section: Related Work
confidence: 99%
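As a concrete illustration of the similarity-based approach mentioned above, here is a minimal PyTorch sketch that encodes the plot (the first four sentences) and a candidate ending with separate LSTMs and scores the pair by cosine similarity. The class name, dimensions, and pooling choice are assumptions, not code from any of the cited papers.

```python
# Minimal sketch of a similarity-based story cloze scorer:
# separate LSTM encoders for plot and ending, cosine similarity score.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlotEndingSimilarity(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.plot_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.end_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, plot_ids, ending_ids):
        # Use each LSTM's final hidden state as a fixed-size summary.
        _, (plot_h, _) = self.plot_lstm(self.embed(plot_ids))
        _, (end_h, _) = self.end_lstm(self.embed(ending_ids))
        return F.cosine_similarity(plot_h[-1], end_h[-1], dim=-1)

# The candidate ending with the higher similarity to the plot wins.
model = PlotEndingSimilarity(vocab_size=10000)
plot = torch.randint(0, 10000, (1, 40))
sim_a = model(plot, torch.randint(0, 10000, (1, 10)))
sim_b = model(plot, torch.randint(0, 10000, (1, 10)))
```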
“…We use the following models as our baselines (footnote 2: http://cs.rochester.edu/nlp/rocstories):

Table 2: Performance comparison with baselines. * indicates that the model is significantly better than the best baseline model.

Model                                   Accuracy (%)
Msap (Schwartz et al. 2017)             75.2
HCM (Chaturvedi, Peng, and Roth 2017)   77.6
DSSM (Huang et al. 2013)                58.5
Cai (Cai, Tu, and Gimpel 2017)          74.7
SeqMANN (Li et al. 2018)                84.7
FTLM (Radford et al. 2018)              86.5
Our Model (Plot&End)                    78.4
Our Model (Full Story)                  87.6*
…”
Section: Experiments: Baselines
confidence: 99%