2023
DOI: 10.5465/amr.2021.0159
Reading The Technological Society to Understand the Mechanization of Values and Its Ontological Consequences

Abstract: General rights: Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners, and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. • Users may download and print one copy of any publication from the public portal for the purpose of private study or research. • You may not further distribute the material or use it for any profit-making activity or commer…

Cited by 14 publications (18 citation statements)
References: 93 publications
“…LLMs resonate, therefore, with an ontological view of the world based on reckoning, or "the calculative rationality of which present-day computers … are capable" (Smith, 2019, p. 110), by processing data through an accumulation of calculus, computation, and formal rationality (Lindebaum et al., 2020). Thus, when ChatGPT is applied to write a scientific text, or even to theorize, it likely reproduces said ontological viewpoint, a viewpoint that impoverishes theorizing due to a tendency toward ontological monism of constructs (Lindebaum, Moser, Ashraf, & Glaser, 2023), and the ontological straightjacketing of judgment into reckoning. And it does so with a click of the mouse.…”

Section: Provocation 2: ChatGPT Changes the Mode and Speed of Theor…
mentioning
confidence: 99%
“…This is not to say that using ChatGPT for the production of scientific 'technoscripts' may not be problematic for theorizing in the rationalist and empiricist tradition. However, exploring these angles is outside the space available here (but see Lindebaum, Moser, & Glaser, 2023). [4] Further, these models are not inclusive: "over 90% of the world's languages used by more than a billion people currently have little to no support in terms of language technology" (Bender et al., 2021, p. 612).…”

mentioning
confidence: 97%
“…Unlike the high-probability choices within the confines of training data that ChatGPT is based upon (Deresiewicz, 2023), we theorize reflexivity as entailing low-probability choices or thinking processes that are new, unexpected and unanticipated, either within or outside the confines of training data. One reason why we conceptualize it this way is that when human beings—with complex goals and needs—reflect using high-probability systems (such as an LLM), there is more opportunity for a misalignment to appear between their objectives and algorithmic outcomes (also see Lindebaum et al., 2022). As opposed to what the human user intended, LLMs tend to echo, for example, 'stereotypical and derogatory associations along gender, race, ethnicity, and disability status' due to biased training data (Bender et al., 2021).…”

Section: Why Does ChatGPT Produce Bad Knowledge?
mentioning
confidence: 99%
“…Convivial technologies (Illich and Anne, 1973) are those that are responsibly limited so as to be operated by the users and for their own interests, rather than by and for expert groups, bureaucracies, or economic elites, as is the case for “industrial technologies.” A key idea behind the concept of conviviality is that users’ lives and interests are rarely as mono-dimensional as suggested by those industrial organizations that massively produce and sell, or purchase and impose, technological objects and associated systems (Ellul, 1954; see also Lindebaum et al., 2022). Rather, a technological balance must be sought between narrowly instrumental efficiency and other considerations such as environmental pollution (Klein, 2015), the flourishing of human natural potentials (MacIntyre, 1999), social isolation (Elias, 2001), extreme social polarization and specialization (Braverman, 1974), or “when cancerous acceleration enforces social change at a rate that rules out legal, cultural, and political precedents as formal guidelines to present behavior” (Illich and Anne, 1973: 5).…”

Section: Conclusion: Reclaiming Post-human Technologies That Work For…
mentioning
confidence: 99%