2022
DOI: 10.1007/s00146-021-01331-9
Surveillance, security, and AI as technological acceptance

Cited by 24 publications (12 citation statements)
References 32 publications
“…A large body of literature investigating AI-related risks already exists, showing that perceived risks usually center on the misuse of AI by governments or tech companies, loss of privacy, and surveillance (Barth & de Jong, 2017; Park & Jones-Jang, 2022; Zhang & Dafoe, 2019, 2020). Kelley et al. (2021), for example, conducted a multi-national survey in Australia, Canada, the USA, South Korea, France, Brazil, India, and Nigeria, corroborating results from previous studies by showing that respondents in all countries expressed concern over job loss and privacy.…”
Section: Risks and Opportunities of AI Applications (supporting)
confidence: 73%
“…We are not arguing that providing access to devices or training is unimportant, but that digital inequality and its interrelatedness with social inequalities along lines of class, age, gender, and ethnicity demand that policy, practice, and research address digital inequality in all its complexity, as is already happening in the field of Artificial Intelligence acceptance; see, for example, the studies of Lutz (2019) and Park and Jones-Jang (2022).…”
Section: Discussion (mentioning)
confidence: 99%
“…In this vein, our dataset does not offer an immediate answer as to precisely how societal mechanisms might function to reverse cultivation against misinformation. But we suspect that the general trust associated with close community attachment, societal ties, and personal-level connections might be part of the mix, especially when interactive settings are egalitarian and thus afford opportunities to filter in and out diverse interpretations (Lee et al., 2018; Park and Chung, 2017; Park and Jones-Jang, 2022; Uslaner, 2012).…”
Section: Discussion (mentioning)
confidence: 99%
“…It is worth noting that we are not dismissive of the suggestion that functional features like cross-checking sources can be an effective corrective tool. Yet we are concerned that social media platforms, with their institutional gatekeeping power, exert exclusive control over public information at their whim, setting the conditions under which misinformation remains “sticky” in its sharing and distribution. As we documented in this work, once misinformation is set in motion, its influence will be hard to reverse without persistent institutional effort, for example, improved algorithm designs that mitigate misinformed traffic (Tandoc et al., 2020; Park and Jones-Jang, 2022). Our finding that misinformation exposure was significantly related to Covid-19 news-following also makes it hard to be optimistic about the good will of media institutions, whether increased access to news information takes the form of social or traditional media consumption.…”
Section: Conclusion and Implication (mentioning)
confidence: 94%