2023
DOI: 10.1177/20539517231206802
Diversity and neocolonialism in Big Data research: Avoiding extractivism while struggling with paternalism

Paula Helm,
Amalia de Götzen,
Luca Cernuzzi
et al.

Abstract: The extractive logic of Big Data-driven technology and knowledge production has raised serious concerns. While most criticism initially focused on the impacts on Western societies, attention is now increasingly turning to the consequences for communities in the Global South. To date, debates have focused on private-sector activities. In this article, we start from the conviction that publicly funded knowledge and technology production must also be scrutinized for their potential neocolonial entanglements. To t…

Cited by 6 publications (4 citation statements). References 53 publications.
“…Despite this potential, critical commentators on AI language technology point out how well-intended research goals such as "technology-based revitalization" regularly misinterpret the needs of local communities (Bender et al., 2021; Bird, 2022). In most cases, native speakers are not involved in the process, or if they are, they take on subordinate roles such as commentator, validator, tester or, worse, data extractor (Helm et al., 2023). Instead of co-creating on an equal footing, in many cases the analytical, high-level work is done in the technology labs of Western universities or companies, where the languages being studied are often not even understood by the people working on them, let alone the cultures they represent (Arora, 2016).…”
Section: Ethical Concerns With Biases In Language Technology
confidence: 99%
“…However, many such efforts rest on a vision in which, with the help of AI, methods and systems already successfully developed and applied, designed and conceived from an Anglo-centric culture of technology development, are adopted one-to-one in other contexts (Bird, 2020; Schwartz, 2022). This approach to bridging the divide leads to a misalignment between the interests and solutions of the former and the lived realities of the latter (Helm et al., 2023). Worse, owing to general ignorance of the deeper dimensions of linguistic diversity, and ultimately of the cultural differences that meaningful diversity embodies, major quality problems in the results are neglected, which, as we will show, can result in far-reaching forms of Westernized cultural homogenization and epistemic injustice (Spivak, 1988).…”
Section: Introduction
confidence: 99%
“…There are critical questions that geographers and other social scientists are already asking to understand where, when and how AI for Good, and ‘climate AI’ in particular, make sense (Alvarez Leon, 2024; Nost & Colven, 2022). Researchers have shown how AI for Good might displace existing regimes of knowledge and expertise in global development work (McDuie‐Ra & Gulson, 2019), how it can greenwash irresponsible corporate behaviour (Espinoza & Aronczyk, 2021), and how Data for Good can perpetuate paternalistic models of development (Helm et al., 2023). Arguably, these outcomes stem from the way AI for Good discourse emphasizes its just ends rather than its means (as may be true of AI projects in general; see Mattern, 2020).…”
Section: Intersections
confidence: 99%
“…Extractivism understood more broadly is a topic that has drawn increasing attention in the literature on data-sharing. For example, both Rodima-Taylor (2024) and Helm et al. (2023) identify extractivist practices and logics in the governance and practice of "big data" technologies and knowledge production. Relatedly, a growing number of case studies identify epistemically extractivist data-generating and data-sharing practices across different academic disciplines.…”
Section: Introduction
confidence: 99%