2020
DOI: 10.1080/10510974.2020.1736114
Privacy, Values and Machines: Predicting Opposition to Artificial Intelligence

Abstract: In this study we identify, for the first time, social determinants of opposition to artificial intelligence, based on the assessment of its benefits and risks. Using a national survey in Spain (n = 5,200) and linear regression models, we show that common explanations regarding opposition to artificial intelligence, such as competition and relative vulnerability theories, are not confirmed or have limited explanatory power. Stronger effects are shown by social values and general attitudes to science. Those expre…


Cited by 38 publications (16 citation statements)
References 60 publications
“…In other words, a stronger sense of digital self-efficacy over new technologies leads to less worry over non-conscious data harvesting. This finding agrees with the literature on 'taming the algorithms,' where researchers found that self-belief in one's ability to exert agency on computational platforms is indicative of active engagement with AI technologies (Lobera et al., 2020; Lu, 2020). Our results thus demonstrate a solution to the privacy paradox, or personalization-privacy paradox, stated in the literature as follows: although most people express strong preferences for the privacy of their personal data, they do not take steps to protect those data and are often willing to give them up in pursuit of personalized benefits (Ameen et al., 2022; Choi et al., 2019; Gerber et al., 2018). The solution is autonomy and control: when people perceive a higher sense of autonomy and mastery over new technologies, they feel less worried about the collection and analysis of their non-conscious emotional data.…”
Section: Privacy and Autonomy (supporting, confidence: 90%)
“…Policy-wise, we can improve patients' perception of EAI through (1) explanation of the AI's underlying algorithmic structure, even if only in high-level abstractions; (2) the current legal and ethical safeguards; and (3) the role humans play in the decision-making process. Previous studies have converged on a common feature of the human-machine relationship: when users perceive a higher level of self-efficacy and have mechanisms to assert meaningful control over the algorithms, they become more comfortable with AI technologies (Lobera et al., 2020; Lu, 2020; McStay, 2020; Mohallick et al., 2018). Future studies can further identify which aspects of AI dependency patients are averse to, whether it is AI replacement of human workers or AI dominant control of the treatment process.…”
Section: Fear of Losing Control (mentioning, confidence: 97%)
“…Taking into account what has been indicated by various authors, we can identify four types of artificial intelligence, among which are (a) reactive machines, which are purely reactive, without the capacity to form memories or to use their experiences to make decisions [7]; (b) limited memory, which can look at the past, allowing the analysis of previously gathered data [8]; (c) theory of mind, in which […] This allows the teacher to rectify and modulate the contents and the proposed tasks. Another benefit of implementing AI in the training of students is that they can benefit from supplementary tutoring by AI-supported virtual assistants [39].…”
Section: Introduction (mentioning, confidence: 99%)