Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d17-1200
Inducing Semantic Micro-Clusters from Deep Multi-View Representations of Novels

Abstract: Automatically understanding the plot of novels is important both for informing literary scholarship and applications such as summarization or recommendation. Various models have addressed this task, but their evaluation has remained largely intrinsic and qualitative. Here, we propose a principled and scalable framework leveraging expert-provided semantic tags (e.g., mystery, pirates) to evaluate plot representations in an extrinsic fashion, assessing their ability to produce locally coherent groupings of novel…

Cited by 12 publications (11 citation statements); references 16 publications. Citing publications span 2018–2023.
“…However, although we are the first to look at the QA task, learning to understand books through other modeling objectives has become an important subproblem in NLP. These range from high-level plot understanding through clustering of novels (Frermann and Szarvas, 2017) or summarization of movie scripts (Gorinski and Lapata, 2015), to more fine-grained processing by inducing character types (Bamman et al., 2014b; Bamman et al., 2014a), understanding relationships between characters (Iyyer et al., 2016; Chaturvedi et al., 2017), or understanding plans, goals, and narrative structure in terms of abstract narratives (Schank and Abelson, 1977; Wilensky, 1978; Black and Wilensky, 1979; Chambers and Jurafsky, 2009). In computer vision, the MovieQA dataset (Tapaswi et al., 2016) fulfills a similar role to NarrativeQA.…”

Section: Related Work (mentioning)

confidence: 99%
“…Books and questions are preprocessed in advance using the book-nlp parser (Bamman et al., 2014), a system for character detection and shallow parsing in books (Iyyer et al., 2016; Frermann and Szarvas, 2017) which provides, among other components: sentence segmentation, POS tagging, dependency parsing, named entity recognition, and coreference resolution. The parser identifies and clusters character mentions, so that all coreferent (direct or pronominal) character mentions are associated with the same unique character identifier.…”

Section: Book and Question Preprocessing (mentioning)

confidence: 99%
“…Most relevant to our work is Iyyer et al. (2016), which suggests the RMN better captures dynamic relationships in literature than hidden Markov models (Gruber et al., 2007) and LDA (Blei et al., 2003). Recent work extended and applied the RMN to other settings, such as studying user roles in online communities (Wang et al., 2016; Frermann and Szarvas, 2017). Notably, Chaturvedi et al. (2017) suggest that an HMM with shallow linguistic features (i.e., FrameNet parses) and global constraints can outperform the RMN for modeling relations in literature.…”

Section: Related Work (mentioning)

confidence: 99%