2016
DOI: 10.1016/j.csl.2015.07.004

Coherent narrative summarization with a cognitive model

Cited by 10 publications (11 citation statements)
References 28 publications
“…The application of cognitive theories of reading comprehension in summarization tasks has received increased attention in the last few years (Zhang et al., 2016; Lloret, 2012). Early work by Fang and Teufel (2014) introduced an effective computational implementation of the KvD theory and proposed an extractive summarizer based on greedy sentence ranking.…”
Section: Related Work (mentioning)
confidence: 99%
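The statement above mentions an extractive summarizer based on greedy sentence ranking. As a rough illustration of that general idea, here is a minimal sketch of greedy sentence selection; the word-overlap scoring heuristic and function names are assumptions made for demonstration, not Fang and Teufel's actual KvD-based scorer.

```python
# Minimal sketch of greedy extractive sentence ranking (illustrative only).
# The scoring heuristic below (coverage of frequent document words) is an
# assumption for demonstration; it is not Fang and Teufel's KvD-based scorer.
from collections import Counter


def greedy_extract(sentences, budget=3):
    """Greedily pick the sentence with the highest marginal score until
    the summary budget (number of sentences) is reached."""
    doc_counts = Counter(w.lower() for s in sentences for w in s.split())
    selected, covered = [], set()

    def marginal_score(sentence):
        # Reward frequent document words not yet covered by the summary.
        words = {w.lower() for w in sentence.split()}
        return sum(doc_counts[w] for w in words - covered)

    candidates = list(sentences)
    while candidates and len(selected) < budget:
        best = max(candidates, key=marginal_score)
        selected.append(best)
        covered |= {w.lower() for w in best.split()}
        candidates.remove(best)
    # Return selected sentences in original document order.
    return [s for s in sentences if s in selected]


if __name__ == "__main__":
    doc = [
        "The KvD model simulates how readers build a coherent text base.",
        "Propositions are held in a limited working memory during reading.",
        "A summarizer can rank sentences by how central their propositions are.",
        "Greedy selection adds the highest-scoring sentence at each step.",
    ]
    print(greedy_extract(doc, budget=2))
```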
“…To make text understandable, readers must establish coherent relations between informative parts of the text [ZLLG16]. This is particularly important in a summary, where we usually want to condense informative portions while preserving the overall level of text comprehension.…”
Section: How Humans Summarize Texts (mentioning)
confidence: 99%
“…Additionally, macro-propositions depend on domain-specific schema, whereas our system aims to be domain-independent. Zhang et al. (2016) presented a summariser based on a later cognitive model by Kintsch (1998). Instead of modelling the importance of propositions directly, their summariser computes the importance of words by spreading activation cyclically, but extracts at proposition level.…”
Section: The KvD Model (mentioning)
confidence: 99%
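The statement above describes computing word importance by cyclic spreading activation. The following toy sketch illustrates the general technique over a simple word co-occurrence graph; the graph construction, decay factor, and iteration count are illustrative assumptions and not the actual procedure of Zhang et al. (2016).

```python
# Toy sketch of cyclic spreading activation over a word co-occurrence graph.
# Graph construction, decay factor, and cycle count are illustrative
# assumptions; this is not the actual procedure of Zhang et al. (2016).
from collections import defaultdict


def build_cooccurrence_graph(sentences):
    """Link words that co-occur in the same sentence (undirected, unweighted)."""
    graph = defaultdict(set)
    for sent in sentences:
        words = [w.lower() for w in sent.split()]
        for i, w in enumerate(words):
            for v in words[i + 1:]:
                if v != w:
                    graph[w].add(v)
                    graph[v].add(w)
    return graph


def spread_activation(graph, seeds, decay=0.5, cycles=5):
    """Repeatedly propagate activation from each word to its neighbours,
    attenuated by a decay factor, for a fixed number of cycles."""
    activation = {w: 0.0 for w in graph}
    for s in seeds:
        activation[s] = activation.get(s, 0.0) + 1.0
    for _ in range(cycles):
        incoming = defaultdict(float)
        for word, score in activation.items():
            share = decay * score / max(len(graph[word]), 1)
            for neighbour in graph[word]:
                incoming[neighbour] += share
        for word, extra in incoming.items():
            activation[word] = activation.get(word, 0.0) + extra
    return activation


if __name__ == "__main__":
    sents = ["the model spreads activation over words",
             "important words receive high activation"]
    g = build_cooccurrence_graph(sents)
    scores = spread_activation(g, seeds=["activation"])
    print(sorted(scores.items(), key=lambda kv: -kv[1])[:5])
```

After the cycles complete, words with the highest accumulated activation are treated as the most important, and extraction can then proceed at the proposition level using those word scores.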
“…NLI is at the core of natural language understanding and has wide applications in NLP, e.g., question answering (Harabagiu and Hickl, 2006) and automatic summarization (Lacatusu et al., 2006; Yan et al., 2011a; Yan et al., 2011b). Moreover, NLI is also related to other tasks of sentence pair modeling, including paraphrase detection (Hu et al., 2014), relation recognition of discourse units (Liu et al., 2016), etc.…”
Section: Introduction (mentioning)
confidence: 99%