“…However, the assessment of the societal impact of a technology in general, and the assessment of AI in particular, is a typical case of the Collingridge dilemma (Collingridge, 1982): these are developments that are either difficult to predict while they do not yet exist, or difficult to manage and regulate once they are already ubiquitous. On the one hand, if the technology is sufficiently developed and available, it can be well evaluated, but by then it is often too late to regulate the development.…”
Section: Studies On Human Perception Of AI
Introduction: Artificial Intelligence (AI) has become ubiquitous in medicine, business, manufacturing, and transportation, and is entering our personal lives. Public perceptions of AI are often shaped either by admiration for its benefits and possibilities, or by uncertainties, potential threats, and fears about this opaque and seemingly mysterious technology. Understanding the public perception of AI, as well as the requirements and attributions associated with it, is essential for responsible research and innovation, and enables aligning the development and governance of future AI systems with individual and societal needs. Methods: To contribute to this understanding, we asked 122 participants in Germany how they perceived 38 statements about artificial intelligence in different contexts (personal, economic, industrial, social, cultural, health). We assessed their personal evaluation and the perceived likelihood of these aspects becoming reality. Results: We visualized the responses in a criticality map that allows the identification of issues requiring particular attention from research and policy-making. The results show that the perceived evaluation and the perceived expectations differ considerably between the domains. The aspect perceived as most critical is the fear of cybersecurity threats, which is seen as highly likely and least liked. Discussion: The diversity of users influenced the evaluation: people with lower trust rated the impact of AI as more positive but less likely. Compared to people with higher trust, they consider certain features and consequences of AI to be more desirable, but they think the impact of AI will be smaller. We conclude that AI is still a “black box” for many. Neither the opportunities nor the risks can yet be adequately assessed, which can lead to biased and irrational control beliefs in the public perception of AI. The article concludes with guidelines for promoting AI literacy to facilitate informed decision-making.
“…Policy studies scholars generally, including information policy researchers, emphasize this need for policy makers at all levels of government to be aware of and responsive to multiple values, stakeholders, and pro‐social responsibilities (e.g., Auer, 2006; Bacchi, 2000; Braman, 2006; Nagel, 1990; Overman & Cahill, 1990; Parsons, 1999; Reidenberg, 1997; Rowlands, 1996; Schön & Rein, 1994; Stone, 2002; White, 1994). Weighing this large set of values against technological innovation can be particularly challenging, as the scholar David Collingridge (1980) first identified (as quoted in Hageman et al., p. 20, note 103):…”
Section: The Value Of Deliberative Government Action
This paper reconsiders the outpacing argument: the belief that changes in law and other means of regulation cannot keep pace with recent changes in technology. We focus on information and communication technologies (ICTs) in and of themselves as well as applied in computer science, telecommunications, health, finance, and other applications, but our argument applies also in rapidly developing technological fields such as environmental science, materials science, and genetic engineering. First, we discuss why the outpacing argument is so closely associated with information and computing technologies. We then outline 12 arguments that support the outpacing argument by pointing to some particular weaknesses of policy making, using the United States as the primary example. Arguing in the opposite direction, we then present 4 brief and 3 more extended criticisms of the outpacing thesis. The paper's final section responds to calls within the technical community for greater engagement with policy and ethical concerns and reviews the paper's major arguments. While the paper focuses on ICTs and policy making in the United States, our critique of the outpacing argument and our exploration of its complex character are of utility to actors in other political contexts and in other technical fields.
“…And, fourth, network and coordination effects mean that advantages accrue to agents adopting the same technology as others. A good example of such lock-in is provided in Collingridge's (1982) discussion of roads and automobiles, which shows how commitment to certain technologies can, in the longer term, create lock-in as communities of practice and new technologies build up the installed base.…”
Section: Technological Innovation In Border Control
The UK government’s e-Borders project presents an intriguing anomaly: despite repeated and acknowledged failings of the project over two decades, it has remained a core part of border strategy across successive administrations. This article seeks to explain the surprising resilience of this programme by developing the concept of political lock-in. We combine insights from critical security studies with science and technology studies concepts of ‘tech hype’ and lock-in. We apply these insights to trace how e-Borders was constructed as a compelling technological solution to pressing security issues. This created a form of political lock-in, whereby the project became impossible to abandon because of its political urgency, despite increasing awareness of its unfeasibility. With the project caught in a liminal state of non-completion, successive governments expanded the scope of the programme by attaching new security problems to it, thereby rendering it even more unviable. Our analysis thus throws up a paradox: rather than mobilizing resources to accomplish its tech vision, securitization created forms of lock-in and paralysis that made the programme more difficult to accomplish.