Handbook of Linguistic Annotation 2017
DOI: 10.1007/978-94-024-0881-2_5

Overview of Annotation Creation: Processes and Tools

Abstract: Creating linguistic annotations requires more than just a reliable annotation scheme. Annotation can be a complex endeavour potentially involving many people, stages, and tools. This chapter outlines the process of creating end-to-end linguistic annotations, identifying specific tasks that researchers often perform. Because tool support is so central to achieving high quality, reusable annotations with low cost, the focus is on identifying capabilities that are necessary or useful for annotation tools, as well …

Cited by 12 publications (7 citation statements); references 12 publications.
“…With these, different annotators can perform the same annotation task reaching equivalent (or very similar) results. As shown in Artstein (2017), Finlayson and Erjavec (2017), Hovy and Lavid (2010) and Pustejovsky and Stubbs (2013), where general rules for annotation design are developed, this idea of reliability as reproducibility has become the predominant reliability concept used in any Computational Linguistics (CL) annotation task. Accordingly, guidelines and good practice descriptions for applying IAA in CL annotation tasks have been developed (for example, Lombard et al (2002); Artstein and Poesio (2008); LeBreton and Senter (2008); Kottner et al (2011)).…”
Section: Introduction
confidence: 99%
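The reliability-as-reproducibility idea discussed above is typically operationalized as an inter-annotator agreement (IAA) coefficient that corrects raw agreement for chance, such as Cohen's kappa (see Artstein and Poesio 2008). As a minimal illustration (not taken from the chapter itself; the two-annotator label sequences are invented for the example), a sketch:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's marginal label distribution.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical annotators labeling the same 10 tokens as N(oun)/V(erb).
a = ["N", "N", "V", "N", "V", "N", "N", "V", "N", "V"]
b = ["N", "N", "V", "V", "V", "N", "N", "V", "N", "N"]
print(round(cohens_kappa(a, b), 3))  # → 0.583
```

Here raw agreement is 0.8, but kappa discounts the agreement expected by chance (0.52), yielding a lower, more conservative reliability estimate.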
“…We are, therefore, in another dimension of text annotation. However, they remain complex systems and difficult to implement (Finlayson and Erjavec, 2017). Despite these difficulties, the interest in automatic annotation systems by the research community increased significantly, according to the results observed in the applications that were being developed (Cornolti et al, 2013).…”
Section: Annotating Texts
confidence: 99%
“…It requires a deep understanding of the underlying task and the user's needs, and experience in human-computer interaction methods and approaches. Before working on Ugarit, we studied the related tools and defined their limitations; we also consulted numerous research papers and surveys [34][35][36][37][38] that reviewed and analyzed the existing annotation tools and defined design principles and usability recommendations, which helped us to build a primary vision of the tool. The development of UGARIT has been achieved through the close collaboration of researchers from Computer Science, Digital Humanities, Classical Philology, and Translation Studies, aiming to gain a better understanding of users' needs.…”
Section: Development Process
confidence: 99%