2020 IEEE International Symposium on Technology and Society (ISTAS) 2020
DOI: 10.1109/istas50296.2020.9462193
AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks

Abstract: Despite interest in communicating ethical problems and social contexts within the undergraduate curriculum to advance Public Interest Technology (PIT) goals, interventions at the graduate level remain largely unexplored. This may be due to the conflicting ways through which distinct Artificial Intelligence (AI) research tracks conceive of their interface with social contexts. In this paper we track the historical emergence of sociotechnical inquiry in three distinct subfields of AI research: AI Safety, Fair Ma…

Cited by 7 publications (3 citation statements). References 43 publications.
“…In what follows, we outline the motivating concerns of each subfield and identify key developments in their evolution. For a more complete analysis see [5].…”
Section: Historical Background
confidence: 99%
“…Due in part to a gap between the pace and investment in advancing fundamental AI technologies and reflecting on its potential harms, identifying points of engagement between critical theory and various AI domains remains challenging. Furthermore, emerging subdomains such as AI Safety, Fair Machine Learning (Fair ML), and Human-in-the-Loop (HIL) Autonomy lack a common origin, and subsequently hold different conceptualizations about the relationship between their systems and society [5]. While these differences make it challenging to develop interventions with field wide relevance, the distinct lines of inquiry pursued by each subfield present opportunities to research AI's increasingly central role in social relations.…”
Section: Introduction
confidence: 99%
“…See Leveson (2011) for further details. The 'solutionism trap' occurs when it is assumed that technical solutions alone can solve complex social and political problems. According to Andrus et al. (2021), a first step to avoid the solutionism trap is to maintain a robust culture of questioning which problems should be addressed, and why. A second step would be to examine which properties are not tied to the technical objects under investigation but to their social contexts.…”
confidence: 99%