The rush to understand the new socio-economic contexts created by the wide adoption of AI is justified by its far-ranging consequences, spanning almost every walk of life. Yet the public sector faces a tragic double bind: its obligation to protect citizens from potential algorithmic harms is at odds with the temptation to increase its own efficiency, or, in other words, to govern algorithms while governing by algorithms. Whether such a dual role is even possible has been a matter of debate: the challenge stems from algorithms' intrinsic properties, which make them distinct from other digital solutions long embraced by governments and create externalities that rule-based programming lacks. As pressures to deploy automated decision-making systems in the public sector mount, this paper examines how the use of AI in the public sector, in relation to existing data governance regimes and national regulatory practices, can intensify existing power asymmetries. To this end, it investigates the legal and policy instruments associated with the use of AI for strengthening the immigration process control system in Canada, "optimising" employment services in Poland, and personalising the digital service experience in Finland, and advocates for a common framework to evaluate the potential impact of the use of AI in the public sector. In this regard, it discusses the specific effects of automated decision support systems on public services and the growing expectation that governments play a more prominent role in the digital society, ensuring that the potential of technology is harnessed while negative effects are controlled and, where possible, avoided.
This is of particular importance in light of the current COVID-19 emergency, in which AI and the regulatory frameworks underpinning data ecosystems have become crucial policy issues. As more and more innovations rely on large-scale data collection from digital devices and on the real-time accessibility of information and services, the contact and relationships between institutions and citizens could strengthen, or undermine, trust in governance systems and democracy.
Potential regulation of the use of artificial intelligence by business should minimize the risks for consumers and society without impeding the possible benefits. To do so, we argue, the legal response should be grounded in empirical analysis and proceed case by case, bottom-up, as a series of responses to concrete research questions. The ambition of this report has been to commence and facilitate that process. We extensively document and evaluate the market practice of the corporate use of AI, map the scholarly debates about (consumer) law and artificial intelligence, and present a list of twenty-five research questions which, in our opinion, require the attention of regulators and academia. The report is divided into four sections. The first explains our understanding of the concepts of "artificial intelligence" (a set of socio-technological practices enabled by machine learning and big data) and "consumer law" (various legal instruments concretizing the principles of weaker-party protection, non-discrimination, regulated autonomy and consumer privacy). The second section documents the ways in which businesses use artificial intelligence in seven sectors of the economy: finance and insurance, information services, energy and "smart solutions", retail, autonomous vehicles, healthcare, and legal services. For each analyzed sector we study the gains for businesses stemming from the deployment of AI, the potential gains but also the challenges for consumers, and third-party effects. In the third section, we repeat the analysis through the lens of four general "uses" of AI by businesses across sectors: knowledge generation, automated decision-making, advertising and other commercial practices, and personal digital assistants. Finally, in the fourth section, we present the questions which we believe should be addressed in the next stage of the research.
We cluster them into: normative questions about regulatory goals, technological and governance questions about regulatory means, and theoretical questions about concepts and preconceptions.