Interpretation is widely regarded as the core activity of literary studies. Still, the appropriate balance between the plurality and the limitation of possible interpretations is a non-trivial issue. While it is sensible to accept that literary texts can generally have various meanings, it should not be possible to attribute just any meaning to a text. Therefore, while interpreters must be allowed to disagree in their analyses, it must at the same time be possible to review whether a disagreement is actually based on adequate reasons, such as textual ambiguity or polyvalence. In this paper, we propose a best practice model as one effective means of reviewing disagreement in accordance with the principles of literary studies. The model was developed during the collaborative, computer-assisted annotation of literary texts in a project in which short stories were analyzed narratologically. The examination of inconsistently annotated text passages revealed four types of reasons for disagreement: misinterpretations, deficient definitions of the categories of analysis, dependencies of the relevant categories on preliminary analyses, and textual ambiguity/polyvalence. We argue that only disagreements based on textual ambiguity should be considered legitimate or valuable cases of disagreement, whereas the other three types of disagreement can be resolved in a systematic way.
Abstract. A system for the recognition and morphological classification of unknown German nouns is described. It takes raw German texts as input and outputs a list of the unknown nouns together with hypotheses about their stems and morphological classes. The system exploits both global and local information as well as morphological properties and external linguistic knowledge. It acquires and applies Mikheev-like ending-guessing rules, which were originally proposed for POS guessing. This paper presents the system design and implementation and discusses its performance through an extensive evaluation.
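The core idea of ending-guessing rules can be illustrated with a minimal sketch: collect suffix-to-class statistics from a lexicon of known nouns, keep only suffixes whose statistics are frequent and reliable enough, and classify an unknown noun by its longest matching suffix. The function names, thresholds, and class labels below are illustrative assumptions, not the actual implementation or parameters of the described system.

```python
from collections import defaultdict


def learn_ending_rules(lexicon, max_suffix_len=5, min_count=3, min_conf=0.8):
    """Collect suffix -> morphological-class statistics from known nouns.

    `lexicon` is a list of (noun, morph_class) pairs. The thresholds
    (minimum support and confidence) are illustrative placeholders.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for noun, cls in lexicon:
        noun = noun.lower()
        for n in range(1, min(max_suffix_len, len(noun)) + 1):
            counts[noun[-n:]][cls] += 1
    rules = {}
    for suffix, cls_counts in counts.items():
        total = sum(cls_counts.values())
        best_cls, best = max(cls_counts.items(), key=lambda kv: kv[1])
        if total >= min_count and best / total >= min_conf:
            rules[suffix] = (best_cls, best / total)
    return rules


def guess_class(noun, rules, max_suffix_len=5):
    """Apply the longest matching ending rule to an unknown noun."""
    noun = noun.lower()
    for n in range(min(max_suffix_len, len(noun)), 0, -1):
        if noun[-n:] in rules:
            return rules[noun[-n:]]
    return None
```

For example, a toy lexicon in which "-ung" and "-heit" nouns share a (hypothetical) feminine declension class lets the guesser classify unseen nouns such as "Erfindung" by their endings, while nouns with no reliable ending match yield no hypothesis.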
This volume is the first of two, and it documents activities that we have been conducting over the past years. They are best described as "organizing shared tasks with/in/for the digital humanities" and have evolved significantly since we started.
This dataset covers 41,341 manual event annotations of six German prose texts from the 19th and early 20th centuries, comprising 290,997 tokens. For each text, the dataset includes annotations by two annotators as well as gold standard annotations. These annotations were used for the automation of narratological event annotation (Vauth, Hatzel, Gius, & Biemann, 2021), a reflection on inter-annotator agreement in literary studies, and the development of an event-based plot model (Gius & Vauth, accepted).