2021
DOI: 10.5210/dad.2021.201
Discourse Relations and Connectives in Higher Text Structure

Abstract: The present article investigates possibilities and limits of local (shallow) analysis of discourse coherence with respect to the phenomena of global coherence and higher composition of texts. We study corpora annotated with local discourse relations in Czech and partly in English to try and find clues in the local annotation indicating a higher discourse structure. First, we classify patterns of subsequent or overlapping pairs of local relations, and hierarchies formed by nested local relations. Special attent…

Cited by 4 publications (1 citation statement)
References 20 publications
“…Resources for implicit discourse relations are scarce compared to the explicit ones, since they are harder to annotate (Miltsakaki et al, 2004). For example, among corpora annotated with discourse relations such as Arabic (Al-Saif and Markert, 2010), Czech (Poláková et al, 2013), Chinese (Zhou and Xue, 2015), English (Prasad et al, 2008), Hindi (Oza et al, 2009), and Turkish (Zeyrek et al, 2013), only the Chinese, English and Hindi corpora include implicit discourse relations (Prasad et al, 2014). In this low-resource scenario, Ji et al (2015) proposed training with explicit relations via unsupervised domain adaptation, viewing explicit relations as a source domain with labeled training data, and implicit relations as a target domain with no labeled data.…”
Section: Introduction
confidence: 99%