This article brings operationalization as a research practice, and its theoretical consequences, into focus. Since the sciences as well as the humanities use concepts to describe their realm of investigation, digital humanities projects are usually faced with the challenge of ‘bridging the gap’ from theoretical concepts (whose meaning(s) depend on a certain theory, and which are used to describe expectations, hypotheses, and results) to results derived from data. The process of developing methods to bridge this gap is called ‘operationalization’, and it is a common task for any kind of quantitative, formal, or digital analysis. Operationalization choices have long-lasting consequences, as they obviously influence the results that can be achieved and, in turn, the possibilities for interpreting these results in terms of the original research question. However, even though this process is so important and so common, its theoretical consequences are rarely reflected upon. Because concepts cannot be operationalized in isolation, operationalizing is not only an engineering or implementation challenge: it touches the theoretical core of the research questions we work on and the fields we work in. In this article, we first clarify the need to operationalize using selected, representative examples, situate the process within typical DH workflows, and highlight the consequences that operationalization decisions have. We then argue that operationalization plays such a crucial role for the digital humanities that any theory of the field needs to take off from operationalization practices. Based on these assumptions, we develop a first scheme of the constraints and necessities of such a theory and reflect on its epistemic consequences.
Nietzsche repeatedly commented on his already published works, and thus continuously reinterpreted them, in order to shape their public reception and to foreground specific aspects of them. In doing so, he pursued a specific “work politics,” or Werkpolitik. The resulting retractions are revealing not only for the reconstruction of Nietzsche’s self-understanding; they also demonstrate both the development and the dynamic character of his thinking. In the present article, this is shown through a so-called “contrasting reading,” which contrasts a posthumous note about The Birth of Tragedy with the Attempt at a Self-Criticism from 1886, with the book itself, and with the chapter in Ecce Homo dedicated to BT. Starting from a close reading of note Nachlass 1888, 17[3], which also takes the genesis of BT into account, I argue that Nietzsche’s self-commentaries combine his current philosophical reflections with work-political objectives. The subsequent comparison reconstructs the philosophical differences between the note and the texts mentioned above, thus demonstrating the dynamic character of Nietzsche’s philosophizing, which is often asserted but seldom reconstructed from the actual texts.
In this contribution, we discuss an abstract workflow for reflected text analysis: starting from a disciplinary research question, suitable working steps and subquestions are identified; on this basis, the concepts central to them are then operationalized via annotation and automation, e.g. machine learning. Applying these annotation rules to the corpus data mostly yields quantitative results, which must be interpreted in an overall synthesis. Besides the details of the operationalization, this synthesis also draws on further prior assumptions, such as disciplinary domain knowledge, whose consequences for the interpretation must be critically reflected on and taken into account.
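To make this workflow concrete, here is a minimal Python sketch under our own assumptions: a single hypothetical annotation rule (a regular-expression pattern for hedging cues) is applied to a toy corpus, and the resulting labels are aggregated into a quantitative result. The pattern, labels, and sentences are illustrative inventions, not part of the actual operationalization discussed in the contribution.

```python
import re
from collections import Counter

# Hypothetical annotation rule: mark sentences containing hedging cues
# ("it seemed", "as if", "apparently") as instances of the target concept.
HEDGE_PATTERN = re.compile(r"\b(it seemed|as if|apparently)\b", re.IGNORECASE)

def annotate(sentences):
    """Apply the rule to each sentence; return per-sentence labels."""
    return ["hedged" if HEDGE_PATTERN.search(s) else "plain" for s in sentences]

corpus = [
    "It seemed as if the ground itself were giving way.",
    "The church collapsed at the first shock.",
    "Apparently, no one had expected the second tremor.",
]

labels = annotate(corpus)
print(Counter(labels))  # Counter({'hedged': 2, 'plain': 1})
# These counts are the quantitative result; they still require interpretation
# against the research question and the assumptions built into the rule itself.
```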
The present article discusses and reflects on possible ways of operationalizing the terminology of traditional literary studies for use in computational literary studies. By »operationalization«, we mean the development of a method for tracing a (theoretical) term back to text-surface phenomena; this is done explicitly and in a rule-based manner, involving a series of substeps. This procedure is presented in detail using a concrete example: Norbert Altenhofer’s »model interpretation« (Modellinterpretation) of Heinrich von Kleist’s The Earthquake in Chile. In the process, we develop a multi-stage operation – reflected upon throughout in terms of its epistemological implications – that is based on a rational-hermeneutic reconstruction of Altenhofer’s interpretation, which focuses on »mysteriousness« (Rätselhaftigkeit), a concept from everyday language. As we go on to demonstrate, one encounters numerous difficulties when trying to operationalize this term, owing to the fact that Altenhofer’s use of it is underspecified in a number of ways. Thus, for instance, and contrary to Altenhofer’s suggestion, Kleist’s sentences containing »relativizing or perspectivizing phrases such as ›it seemed‹ or ›it was as if‹« (Altenhofer 2007, 45) by no means suggest, when analyzed linguistically, a questioning or challenging of the narrated events, since the unreal quality of those German sentences relates only to the comparison in the subordinate clause, not to the respective main clause. Another indicator central to Altenhofer’s ascription of »mysteriousness« is his concept of a »complete facticity« (lückenlose Faktizität) which »does not seem to leave anything ›open‹« (Altenhofer 2007, 45). Again, what exactly qualifies facticity as »complete« is left open, since Kleist’s novella does indeed select certain phenomena and actions within the narrated world for portrayal (and not others). The degree of facticity in Kleist’s text may be higher than in other texts, but it is by no means »complete«. In the context of Altenhofer’s interpretation, »complete facticity« may be taken to mean a narrative mode in which terrible events are reported in conspicuously sober and at times drastic language. Following the critical reconstruction of Altenhofer’s use of terminology, the central terms and their relationships to one another are first explicated (in natural language), which already requires intensive conceptual work. We do so by implementing a hierarchical understanding of the terms discussed: the definition of one term uses other terms, which in turn need to be defined and operationalized. In accordance with the requirements of computational text analysis, this hierarchy of terms should bottom out in »directly measurable« terms, i.e. terms that can be clearly identified on the surface of the text. This, however, leads to the question of whether (and, if so, on the basis of which theoretical assumptions) the terminology of literary studies may be traced back to text-surface phenomena in this way. Following the pragmatic as well as theoretical discussion of this complex of questions, we indicate ways in which such definitions may be converted into manual or automatic recognition procedures. In the case of manual recognition, the paradigm of annotation – as established and methodologically reflected in (computational) linguistics – is useful, and a well-controlled annotation process helps to further clarify the terms in question.
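To illustrate what such a hierarchy of terms could look like at the »directly measurable« end, here is a short Python sketch of our own devising. The indicator functions, cue words, and the rule combining them are hypothetical stand-ins, not Altenhofer’s criteria and not the operationalization actually developed in the article.

```python
import re

# Hypothetical bottom-level indicators, identifiable on the text surface.
def has_perspectivizing_phrase(sentence: str) -> bool:
    """Surface cue: relativizing phrases such as 'it seemed' or 'as if'."""
    return bool(re.search(r"\b(it seemed|as if)\b", sentence, re.IGNORECASE))

def is_soberly_factual(sentence: str) -> bool:
    """Crude proxy for 'complete facticity': no overt hedging adverbs."""
    return not re.search(r"\b(perhaps|maybe|possibly)\b", sentence, re.IGNORECASE)

# Higher-level term, defined via the lower-level indicators.
def mysteriousness_candidate(sentence: str) -> bool:
    """Candidate: factual in tone, yet perspectivized at the same time."""
    return is_soberly_factual(sentence) and has_perspectivizing_phrase(sentence)

print(mysteriousness_candidate("It seemed as if the whole sky were on fire."))  # True
```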
The primary goal, however, is to establish a recognition rule by which individuals can intersubjectively and reliably identify instances of the term in question in a given text. Applying this method to literary studies does raise new challenges – such as the question of the validity and reliability of the annotations – but these challenges are currently being researched intensively in the field of computational literary studies, which has produced a large and growing body of research to draw on. In terms of computer-aided recognition, we examine, by way of example, two distinct approaches: 1) Operationalization guided by prior definitions and annotation rules benefits from the fact that each of its steps is transparent, can be validated and interpreted, and that existing tools from computational linguistics can be integrated into the process. In the scenario used here, these would be tools for recognizing and attributing character speech, for coreference resolution, and for the detection of events; all of these, in turn, may be based on machine learning, handcrafted rules, or dictionaries. 2) In recent years, so-called end-to-end systems have become popular: with the help of neural networks, they »infer« target terms directly from a numerical representation of the data. These systems achieve superior results in many areas, but their lack of transparency raises new questions, especially with regard to the interpretation of results. Finally, we discuss options for quality assurance and draw a first conclusion. Since numerous decisions have to be made in the course of operationalization, and since these are often justified pragmatically in practice, the question quickly arises as to how »good« a given operationalization actually is. And since the tools borrowed from computational linguistics (especially the so-called inter-annotator agreement) transfer only partially to computational literary studies, and objective standards for the quality of a given implementation are difficult to find, it ultimately falls to the community of researchers and scholars to decide, based on their research standards, which operationalizations they accept. At the same time, operationalization is the central link between computer science and literary studies, as well as a necessary component of a large part of the research done in computational literary studies. The advantage of a conscious, deliberate, and reflective operationalization practice lies not only in the fact that it yields reliable quantitative results (or at least makes a certain lack of reliability a known factor); it also facilitates interdisciplinary cooperation: in the course of operationalization, concrete data sets and the methods for analyzing them are discussed, which together minimizes the risk of misunderstandings, of »false friends«, and of unproductive exchange more generally.
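Since the abstract invokes inter-annotator agreement, it may help to show how such agreement is commonly quantified. Cohen’s kappa corrects the observed agreement p_o for the agreement p_e expected by chance: kappa = (p_o − p_e) / (1 − p_e). The following minimal sketch uses scikit-learn; the two annotators’ label sequences are invented for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Invented labels from two annotators marking sentences as 'mysterious' (1) or not (0).
annotator_a = [1, 0, 0, 1, 1, 0, 1, 0]
annotator_b = [1, 0, 1, 1, 0, 0, 1, 0]

# Raw agreement is 6/8 = 0.75; chance agreement is 0.5 for these marginals,
# so kappa = (0.75 - 0.5) / (1 - 0.5) = 0.50.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # Cohen's kappa: 0.50
```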