2022
DOI: 10.1177/20539517221131290

Algorithmic failure as a humanities methodology: Machine learning's mispredictions identify rich cases for qualitative analysis

Abstract: This commentary tests a methodology proposed by Munk et al. (2022) for using failed predictions in machine learning as a method to identify ambiguous and rich cases for qualitative analysis. Using a dataset describing actions performed by fictional characters interacting with machine vision technologies in 500 artworks, movies, novels and videogames, I trained a simple machine learning algorithm (using the kNN algorithm in R) to predict whether or not an action was active or passive using only information abou…
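The method described in the abstract — train a simple classifier, then treat its *mispredictions* as pointers to ambiguous cases worth close reading — can be sketched in a few lines. The paper used kNN in R; the following is a minimal pure-Python stand-in with invented toy data (the feature vectors, labels, and held-out items are illustrative placeholders, not the paper's actual encoding of character actions).

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy training data: (feature vector, label). Purely illustrative.
train = [
    ((0.9, 0.1), "active"), ((0.8, 0.2), "active"), ((0.7, 0.3), "active"),
    ((0.1, 0.9), "passive"), ((0.2, 0.8), "passive"), ((0.3, 0.7), "passive"),
]

# Held-out items with human-assigned labels; the last is deliberately ambiguous.
held_out = [
    ((0.85, 0.15), "active"),
    ((0.15, 0.85), "passive"),
    ((0.45, 0.55), "active"),  # borderline case the model is likely to miss
]

# "Algorithmic failure": mispredictions flag rich cases for qualitative reading.
failures = [(x, gold, knn_predict(train, x))
            for x, gold in held_out
            if knn_predict(train, x) != gold]

for x, gold, pred in failures:
    print(f"review {x}: labelled {gold!r}, predicted {pred!r}")
```

The point of the sketch is the last step: instead of optimizing accuracy, the researcher collects the disagreements between the human coding and the model's prediction and reads those cases closely, since they tend to be the ones where the coding categories themselves are ambiguous.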

Cited by 10 publications (7 citation statements)
References 15 publications
“…For instance, drones are most commonly represented as recording, killing, transmitting and targeting and are represented as being controlled by human beings ( Rettberg, forthcoming ). Based on a machine learning analysis of the machine vision situations dataset, Rettberg (2022a) developed the method of ‘algorithmic failure’ to identify particularly salient cases for further study. The data collected on digital games formed the basis for three studies: one on the use of surveillance cameras as an interface in digital games, proposing the term ‘cyborg vision’ to account for the experience of embodied surveillance that these games offer to the player ( Solberg, 2022a ); a second on how holograms mediate between human and non-human actors in games ( Solberg, 2021 ); and a third on enhanced vision in games and its relation to ideas of domination and power ( Solberg, 2022b ).…”
Section: Tracing Agency
confidence: 99%
“…On the level of the dataset as a whole, the information about each character’s represented gender, race, age, species and sexuality in connection with the assigned verbs can also be used to study racial bias and other biases in how machine vision technologies are imagined across many creative works. The dataset has been deposited in the UiB Open Research Data repository and is available for further research under a Creative Commons licence ( Rettberg et al , 2022a ).…”
Section: Tracing Agency
confidence: 99%
“…Mitchell et al, 2021; Obermeyer et al, 2019). However, AI can also be used to study historical biases in society by examining large-scale data sets of historical texts and images to identify the distribution and shifts in the representation of specific societal groups—for example, along gender or racial lines (Jürgens et al, 2022)—or, more generally, identify for further analysis promising cases in which empirical data contradicts model-based expectations (Munk et al, 2022; Rettberg, 2022).…”
Section: Artificial Intelligence and Democracy: The Road Ahead
confidence: 99%
“…This sort of knowledge (knowledge that might be contained within predictions of most likely output classes, nearest-neighbors data, etc.) can be made meaningful through the selection of problematic data, in terms of assumed unrepresentative features of a particular class, or overdetermined data, characterized by the dominance of class-specific features, and the construction of datasets featuring antipodal samples, such as negative and positive examples of a particular class. Jill Walker Rettberg takes what she calls "algorithmic failure" as a key method for humanities researchers to use machine learning "against the grain" to investigate false positives from classification tasks (Rettberg, 2022). This allows Rettberg to explore assumptions and ambiguities in the categories used by the classifier (i.e., active vs. passive actions) and within datasets.…”
Section: Adversarial Testing and Probing
confidence: 99%