Background: Mutations in the BRCA1 and BRCA2 genes are associated with a significantly elevated lifetime risk of developing breast and ovarian cancer. This year marks 25 years since genetic tests for BRCA1/2 mutations became available to the public. Comprehensive guidelines now exist for BRCA1/2 testing and for preventive measures in mutation carriers. As such, BRCA1/2 testing represents a precedent not only in genetic testing and the management of genetic cancer risk, but also in bioethics. The goal of the current research was to offer a review and an ethical primer on the main ethical challenges related to BRCA testing. Methods: A systematic scoping review was undertaken following the PRISMA Extension for Scoping Reviews (PRISMA-ScR). Four databases were searched, and 18 articles that met the inclusion criteria were synthesized narratively into a conceptual map. Results: Ethical discussions revolved around the BRCA1/2 gene discovery, how tests are distributed for clinical use, the choice to undergo testing, unresolved issues in receiving and disclosing test results, reproductive decision-making, and culture-specific ethics. Several unique properties of the latest testing circumstances (e.g., the incorporation of BRCA1/2 testing into multi-gene or whole-genome sequencing panels, and tests sold directly to consumers) have significantly raised the complexity of the ethical debates. Conclusions: Multidisciplinary ethical discussion is necessary to guide not only individual decision-making but also societal practices and medical guidelines, in light of the new technologies available and the latest results regarding psychological, social, and health outcomes in cancer previvors and survivors affected by BRCA mutations.
Objective: To analyze which ethically relevant biases have been identified in the academic literature in artificial intelligence (AI) algorithms developed either for patient risk prediction and triage or for contact tracing during the COVID-19 pandemic, and, additionally, to investigate whether the role of social determinants of health (SDOH) has been considered in these AI developments. Methods: We conducted a scoping review of the literature covering publications from March 2020 to April 2021. Studies mentioning biases in AI algorithms developed for contact tracing and for medical triage or risk prediction regarding COVID-19 were included. Results: From 1054 identified articles, 20 studies were finally included. We propose a typology of the biases identified in the literature, organized by biases, limitations, and other ethical issues in both areas of analysis. Results on health disparities and SDOH were classified into five categories: racial disparities, biased data, socio-economic disparities, unequal accessibility and workforce, and information communication. Discussion: SDOH need to be considered in the clinical context, where they still seem underestimated. Epidemiological conditions depend on geographic location, so the use of local data in studies intended to produce international solutions may introduce additional biases. Gender bias was not specifically addressed in the articles included. Conclusions: The main biases are related to data collection and management. Ethical problems related to privacy, consent, and lack of regulation have been identified in contact tracing, while some bias-related health inequalities have been highlighted. There is a need for further research focusing on SDOH and these specific AI applications.
Since 2013, the aging analysis tools routinely used for the Large Hadron Collider (LHC) magnet series-production measurement campaigns conducted at the CERN Superconducting Magnet Test Facility (SM18) have been progressively replaced by a novel open-data, user-driven analysis environment. This effort runs in parallel with the ongoing development of magnet prototypes within the framework of the High-Luminosity Upgrade of the CERN LHC (HL-LHC project). This R&D phase requires new features in the quench analysis software to cope with dedicated or specific magnet tests (splice resistance, inductance, AC loss, quench-heater efficiency, hot-spot temperature assessment, etc.), as well as more open access to the mathematical routines and to the output results. The new data handling and analysis framework currently being deployed rests on two pillars: first, the availability of the legacy proprietary raw data in an open and widely accessible format; second, the ability for users to process the data with dedicated numerical tools and algorithms recently developed for data viewing, analysis, and formatted test result reports. In this paper, the initial analysis framework is described in terms of the acquisition system that produces the data, the conversion tool that standardizes the file format, the new analysis tool that replaces the existing quench analysis software, and the database tool that archives the summary of every test. Finally, some statistics on the current situation are presented, based on one year of analysis work within the superconducting magnet test facility.
The main aim of this article is to reflect on the impact of biases in artificial intelligence (AI) systems developed to tackle issues arising from the COVID-19 pandemic, with a special focus on those developed for triage and risk prediction. A secondary aim is to review assessment tools that have been developed to prevent biases in AI systems. In addition, we provide conceptual clarification for some terms related to biases in this particular context. We focus mainly on non-racial biases, which may receive less attention when addressing biases in AI systems in the existing literature. We found that the existence of bias in AI systems used for COVID-19 can result in algorithmic injustice, and that the legal frameworks and strategies developed to prevent the emergence of bias have failed to adequately consider social determinants of health. Finally, we make some recommendations on how to include more diverse professional profiles in order to develop AI systems with the epistemic diversity needed to tackle AI biases during the COVID-19 pandemic and beyond.