Journal of Cultural Analytics, 2019
DOI: 10.22148/16.047

Foreword to the Special Issue “A Shared Task for the Digital Humanities: Annotating Narrative Levels”

Abstract: This volume is the first of two, and it documents activities that we have been conducting over the past several years. They are best described as "organizing shared tasks with/in/for the digital humanities" and have evolved significantly since we started.

Cited by 4 publications (3 citation statements)
References 0 publications
“…Once they had completed their annotations, whenever they were in agreement, that label was used as the label on that edge. Whenever they were in disagreement, an equally qualified arbitrator decided which label to use for that edge [97][98][99]. We did not ask the annotators to add additional nodes or edges, although both of them decided independently to annotate the Edgar Welch episode described in the article, adding two additional nodes: "Edgar Welch" and "Police".…”

Section: Discussion (mentioning)
confidence: 99%
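The excerpt above describes a simple adjudication protocol: an edge keeps the annotators' shared label when they agree, and an arbitrator decides when they do not. A minimal sketch of that logic follows; the data shapes, function names, and the stub arbitrator are hypothetical illustrations, not code from the cited study.

```python
# Minimal sketch of the adjudication protocol described above: edges labeled
# identically by both annotators keep that label; disagreements are resolved
# by an arbitrator. All names here are hypothetical, not from the cited study.
from typing import Callable, Dict, Tuple

Edge = Tuple[str, str]  # (source node, target node)

def adjudicate(
    ann_a: Dict[Edge, str],
    ann_b: Dict[Edge, str],
    arbitrate: Callable[[Edge, str, str], str],
) -> Dict[Edge, str]:
    """Merge two annotators' edge labels, deferring disagreements to an arbitrator."""
    merged = {}
    # Only edges annotated by both annotators are adjudicated.
    for edge in ann_a.keys() & ann_b.keys():
        a, b = ann_a[edge], ann_b[edge]
        merged[edge] = a if a == b else arbitrate(edge, a, b)
    return merged

# Usage: a stub arbitrator that always sides with the first annotator.
labels = adjudicate(
    {("Edgar Welch", "Police"): "surrendered_to"},
    {("Edgar Welch", "Police"): "arrested_by"},
    arbitrate=lambda edge, a, b: a,
)
```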
“…In the computational linguistics community, the most relevant of these setups is PAN (Plagiarism Analysis, Authorship Identification, and Near-Duplicate Detection), a competition held each year in the context of the Conference and Labs of the Evaluation Forum (CLEF), where teams of programmers are invited to write scripts to solve a series of shared tasks, generally focused on authorship attribution. Such a 'shared task' can generally be considered good research practice, as it entails a critique of the tools, which are constantly rethought and improved through competition [Gius et al 2019].…”

Section: Word Frequencies in Stylometry (mentioning)
confidence: 99%
“…Examples of annotation guidelines for the literary phenomenon of narrative levels can be found in the Cultural Analytics special issue on the SANTA shared task (Gius, Reiter et al. 2019). With Barth (2020) and Ketschik, Murr et al. (2020), two of these guidelines are also included in this volume, starting on page 423.…”

Section: Examples of Annotation Guidelines (unclassified)