2017
DOI: 10.3917/lf.195.0017

Analyse, visualisation et identification automatique des chaînes de coréférences : des questions interdépendantes ? [Analysis, visualisation and automatic identification of coreference chains: interdependent questions?]

Abstract: A coreference chain is a structure that groups a set of referring expressions (also called mentions, or links) that all designate the same extralinguistic entity. Each mention can be enriched with linguistic annotations, as can the relations connecting certain mentions. As a result, such a structure is difficult to grasp and to draw analyses from directly. We present key methodological guidelines for exploiting a corpus annotated with coreference cha…
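The abstract describes a coreference chain as a set of annotated mentions (links) plus annotated relations between some of them. The following is a minimal Python sketch of such a structure; the class names, field names, and feature labels are illustrative assumptions, not the annotation scheme used in the paper:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Mention:
    """One referring expression (link) of a chain, with optional linguistic annotations."""
    text: str                                      # surface form, e.g. "la jeune femme"
    start: int                                     # character offset of the mention
    end: int
    features: dict = field(default_factory=dict)   # e.g. {"pos": "NP", "definiteness": "def"}

@dataclass
class Relation:
    """A typed, annotated link between two mentions of the same chain."""
    source: int                                    # index of the antecedent in `mentions`
    target: int                                    # index of the anaphoric mention
    label: str                                     # e.g. "pronominal_anaphora"

@dataclass
class CoreferenceChain:
    """All mentions designating one extralinguistic entity, plus relations between them."""
    entity_id: str
    mentions: list[Mention] = field(default_factory=list)
    relations: list[Relation] = field(default_factory=list)

# Toy chain with three mentions of the same referent.
chain = CoreferenceChain(
    entity_id="E1",
    mentions=[
        Mention("Marie", 0, 5, {"pos": "PROPN"}),
        Mention("la jeune femme", 20, 34, {"pos": "NP", "definiteness": "def"}),
        Mention("elle", 50, 54, {"pos": "PRON"}),
    ],
    relations=[Relation(source=0, target=2, label="pronominal_anaphora")],
)
print(len(chain.mentions))  # chain length, one basic quantity in corpus analysis
```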

Cited by 8 publications (5 citation statements)
References 9 publications
“…The discussions about the specification of the coreference task in MUC and ACE (van Deemter & Kibble 2000) eventually led to proposals for the annotation of anaphoric information (Passonneau 1997, Poesio et al 1999) that were more directly based on the linguistic approach to anaphora discussed in Section 2.1. Most of the corpora developed since have adopted a similar approach (Poesio 2004, Hinrichs et al 2005, Hendrickx et al 2008, Poesio & Artstein 2008, Nedoluzhko et al 2009, Pradhan et al 2012, Ogrodniczuk et al 2015, Landragin 2016, Zeldes 2020). In particular, the creation of OntoNotes (Pradhan et al 2012) and the shared tasks based on OntoNotes and other data sets of this type (Pradhan et al 2012) led to a move away from the modeling of coreference in the sense of MUC and ACE and toward anaphora resolution as traditionally conceived in linguistics and psychology.…”
Section: Linguistically Motivated Data Sets
confidence: 99%
“…There are also end-to-end coreference resolution systems for French, such as DeCOFre (Grobol, 2020) and coFR (Wilkens et al., 2020). DeCOFre is trained primarily on spontaneous spoken language (ANCOR corpus; Muzerelle et al., 2013), while coFR is trained on both spoken (ANCOR corpus) and written language (Democrat corpus; Landragin, 2016). For this study, we use coFR, as it is better suited for our corpus (i.e., archives and documents).…”
Section: Coreference Chains
confidence: 99%
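End-to-end systems such as coFR ultimately return, for each document, clusters of mention spans that stand for coreference chains. As a hedged illustration only (the cluster-of-spans output shape below is a common convention, not the actual interface of coFR or DeCOFre), one way such predictions can be turned into chain-level statistics for corpus analysis:

```python
# Assumed output shape: each predicted entity is a list of (start_token, end_token) spans.
predicted_clusters = [
    [(0, 0), (12, 13), (27, 27)],   # entity 1: three mentions
    [(4, 5), (19, 19)],             # entity 2: two mentions
]

def chain_lengths(clusters):
    """Number of mentions in each predicted coreference chain."""
    return [len(cluster) for cluster in clusters]

def mention_density(clusters, n_tokens):
    """Share of tokens covered by at least one predicted mention."""
    covered = set()
    for cluster in clusters:
        for start, end in cluster:
            covered.update(range(start, end + 1))
    return len(covered) / n_tokens

print(chain_lengths(predicted_clusters))                  # [3, 2]
print(round(mention_density(predicted_clusters, 30), 2))  # 0.23
```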