Governments and citizens of nearly every nation have been compelled to respond to COVID-19. Many measures have been adopted, including contact tracing and risk assessment algorithms, whereby citizens' whereabouts are monitored to trace contact with infectious individuals and a risk status is generated via algorithmic evaluation. Based on 38 in-depth interviews, we investigate how people make sense of Health Code (jiankangma), the Chinese contact tracing and risk assessment algorithmic sociotechnical assemblage. We probe how people accept or resist Health Code by examining their ongoing, dynamic, and relational interactions with it. Participants display a rich variety of attitudes toward privacy and surveillance, ranging from fatalism about the possibility of privacy to acceptance of surveillance as a trade-off for public health; these attitudes are mediated by the perceived effectiveness of Health Code and by changing views on the intentions of the institutions that deploy it. We show how perceived competency depends not just on how well the technology works, but on the social and cultural enforcement of various non-technical aspects such as quarantine, citizen data inputs, and cell reception. Furthermore, we illustrate how perceptions of Health Code are nested in people's broader interpretations of disease control at the national and global levels, and unexpectedly strengthen the Chinese authorities' legitimacy. Neither the Chinese public, Health Code, nor people's perceptions of Health Code are predetermined, fixed, or categorically consistent; rather, they are co-constitutive and dynamic over time. We conclude with a theorization of relational perception and methodological reflections for studying algorithmic sociotechnical assemblages beyond COVID-19.
Background: The increased use of electronic health records (EHRs) has resulted in new opportunities for research, but also raises concerns regarding privacy, confidentiality, and patient awareness. Because public trust is essential to the success of the research enterprise, patient perspectives are essential to the development and implementation of ethical approaches to the research use of EHRs. Yet, little is known about patients' views and expectations regarding various approaches to seeking permission for research use of their EHR data. Methods: We conducted semi-structured interviews with 120 patients in four counties in diverse regions of the southeastern United States: Appalachia, the Mississippi Delta, and the Piedmont area of North Carolina. We asked participants to consider, from multiple stakeholder perspectives, the advantages and disadvantages of three approaches to notifying patients of, or obtaining permission for, research use of their EHR data; whether they believed it would be acceptable if their healthcare organization used each approach; and which approach would be most appropriate. Results: Nearly all participants said General Notification, Broad Permission, and Categorical Permission would each be acceptable approaches to notification of, or permission for, EHR research. Over half identified Broad Permission as the most appropriate approach. Across all of these discussions, major themes included the importance of clarity, simplicity, and usability of patient-facing materials, as well as the level of transparency, trustworthiness, and respect for patients the approach conveys. Conclusions: Our findings help to inform the development and implementation of ethical approaches to the research use of EHRs by identifying key patient considerations regarding various approaches to permission and suggesting potential actions for healthcare organizations and researchers.
Governments, institutions, and citizens of nearly every nation have been compelled to respond to COVID-19. Many measures have been adopted, including contact tracing and risk assessment, whereby citizens' whereabouts are constantly monitored to trace contact with infectious individuals and contagious parties are isolated via algorithmic evaluation of their risk status. This paper investigates how citizens make sense of Health Code (jiankangma), the contact tracing and risk assessment algorithm in China. We probe how people accept or resist the algorithm by examining their ongoing, dynamic, and relational interactions with it over time. By seeking a deeper, iterative understanding of how individuals accept or resist the algorithm, our data unearth three key sites of concern. First, understandings of algorithmic surveillance shape and are shaped by notions of privacy, including fatalism toward the possibility of true privacy in China and a trade-off narrative between privacy and the twin imperatives of public and economic health. Second, trust in the algorithm is mediated by the perceived competency of the technology, the veracity of input data, and well-publicized failures in both data collection and analysis. Third, the implementation of Health Code in social life alters beliefs about the algorithm, such as its continued role after COVID-19 passes, or contradictory and disorganized enforcement measures following risk assessment. Chinese citizens make sense of Health Code in a relational fashion, whereby users respond very differently to the same sociotechnical assemblage based upon social and individual factors.
Artificial general intelligence (AGI), defined as machine intelligence with competence equal to or greater than that of humans, is a greatly anticipated technology with non-trivial existential risks. To date, social scientists have dedicated little effort to the ethics of AGI or to AGI researchers. This paper employs inductive discourse analysis of the academic literature of two intellectual groups writing on the ethics of AGI: applied and/or 'basic' scientific disciplines, henceforth referred to as technicians (e.g., computer science, electrical engineering, physics), and philosophy-adjacent disciplines, henceforth referred to as PADs (e.g., philosophy, theology, anthropology). These groups agree that AGI ethics is fundamentally about mitigating existential risk. They highlight our moral obligation to future generations, demonstrate the ethical importance of better understanding consciousness, and endorse a hybrid of deontological and utilitarian normative ethics. Technicians favor technocratic AGI governance, embrace the project of 'solving' moral realism, and are more deontologically inclined than PADs. PADs support a democratic approach to AGI governance, are more skeptical of deontology, consider current AGI predictions fundamentally imprecise, and are wary of using AGI for moral fact-finding.