Background: The Fast Healthcare Interoperability Resources (FHIR) standard is widely used in health information technology. However, its use as a standard for health research is still less prevalent. To use existing data sources more efficiently for health research, data interoperability is becoming increasingly important. FHIR provides solutions by offering resource domains such as "Public Health & Research" and "Evidence-Based Medicine" while building on established web technologies. FHIR could therefore help standardize data across different data sources and improve interoperability in health research.

Objective: The aim of our study was to provide a systematic review of the existing literature and to determine the current state of FHIR implementations in health research, as well as possible future directions.

Methods: We searched the PubMed/MEDLINE, Embase, Web of Science, IEEE Xplore, and Cochrane Library databases for studies published from 2011 to 2022. Studies investigating the use of FHIR in health research were included. Articles published before 2011, abstracts, reviews, editorials, and expert opinions were excluded. We followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines and registered this study with PROSPERO (CRD42021235393). Data were synthesized in tables and figures.

Results: We identified a total of 998 studies, of which 49 were eligible for inclusion. Most of the 49 studies (73%, n=36) covered the domain of clinical research, whereas the remaining studies focused on public health or epidemiology (6%, n=3) or did not specify their research domain (20%, n=10). Studies used FHIR for data capture (29%, n=14), standardization of data (41%, n=20), analysis (12%, n=6), recruitment (14%, n=7), and consent management (4%, n=2). Most (55%, 27/49) of the studies had a generic approach, and 55% (12/22) of the studies focusing on specific medical specialties (infectious disease, genomics, oncology, environmental health, imaging, and pulmonary hypertension) reported their solutions to be transferable to other use cases. Most (63%, 31/49) of the studies reported using additional data models or terminologies: Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT; 29%, n=14), Logical Observation Identifiers Names and Codes (LOINC; 37%, n=18), International Classification of Diseases, 10th Revision (ICD-10; 18%, n=9), the Observational Medical Outcomes Partnership (OMOP) common data model (12%, n=6), and others (43%, n=21). Only 4 (8%) studies used a FHIR resource from the domain "Public Health & Research." Reported limitations of using FHIR included possible changes in the content of FHIR resources, safety, legal matters, and the need for a FHIR server.

Conclusions: Our review found that FHIR can be implemented in health research, and the areas of application are broad and generalizable in most use cases. The implementation of international terminologies was common, and other standards such as the Observational Medical Outcomes Partnership common data model can be used as a complement to FHIR. Limitations such...
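To make the data-standardization use case described above concrete, the following is a minimal sketch, not taken from any of the reviewed studies, of how a single data point can be represented as a FHIR R4 Observation carrying a LOINC code and submitted to a FHIR server over HTTP. The server URL and patient reference are hypothetical, and Python's requests library stands in for any HTTP client.

```python
import json
import requests  # generic HTTP client; any HTTP library would work

# A minimal FHIR R4 Observation with a LOINC-coded lab value.
# The patient reference and server URL below are hypothetical.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "718-7",  # LOINC: Hemoglobin [Mass/volume] in Blood
            "display": "Hemoglobin [Mass/volume] in Blood",
        }]
    },
    "subject": {"reference": "Patient/example-123"},
    "valueQuantity": {
        "value": 13.5,
        "unit": "g/dL",
        "system": "http://unitsofmeasure.org",  # UCUM units
        "code": "g/dL",
    },
}

# POST the resource to a (hypothetical) research FHIR server endpoint.
response = requests.post(
    "https://fhir.example.org/baseR4/Observation",
    headers={"Content-Type": "application/fhir+json"},
    data=json.dumps(observation),
    timeout=30,
)
print(response.status_code)
```

Because the coding systems (LOINC, UCUM) are referenced by canonical URLs inside the resource itself, any FHIR-conformant receiver can interpret the value without a site-specific data dictionary, which is the interoperability property the review highlights.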
Background: Resources are increasingly spent on artificial intelligence (AI) solutions for medical applications aiming to improve the diagnosis, treatment, and prevention of diseases. While the need for transparency and the reduction of bias in data and algorithm development has been addressed in past studies, little is known about the knowledge and perception of bias among AI developers.

Objective: This study's objective was to survey AI specialists in health care to investigate developers' perceptions of bias in AI algorithms for health care applications and their awareness and use of preventative measures.

Methods: A web-based survey was provided in both German and English, comprising a maximum of 41 questions using branching logic within the REDCap web application. Only the results of participants with experience in the field of medical AI applications and complete questionnaires were included in the analysis. Demographic data, technical expertise, and perceptions of fairness, as well as knowledge of biases in AI, were analyzed, and variations among gender, age, and work environment were assessed.

Results: A total of 151 AI specialists completed the web-based survey. The median age was 30 (IQR 26-39) years, and 67% (101/151) of respondents were male. About one-third of respondents rated their AI development projects as fair (47/151, 31%), and a similar share rated them as moderately fair (51/151, 34%); 12% (18/151) reported their AI to be barely fair, and 1% (2/151) not fair at all. One participant identifying as diverse rated AI developments as barely fair, and the 2 participants of undefined gender rated them as barely fair and moderately fair, respectively. The reasons for bias selected by respondents were a lack of fair data (90/132, 68%), of guidelines or recommendations (65/132, 49%), or of knowledge (60/132, 45%). Over half of the respondents worked with image data (83/151, 55%), half worked with data from 1 center only (76/151, 50%), and 35% (53/151) worked exclusively with national data.

Conclusions: This study shows that developers' overall perception of fairness in their AI projects is only moderate. Gender minorities did not once rate their AI development as fair or very fair. Therefore, further studies need to focus on minorities and women and their perceptions of AI. The results highlight the need to strengthen knowledge about bias in AI and to provide guidelines on preventing bias in AI health care applications.
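As an illustration of the descriptive statistics reported above (a median age with IQR and response proportions), here is a minimal Python sketch using pandas. The column names and records are invented for demonstration only; they are not the study's data.

```python
import pandas as pd

# Hypothetical survey records, standing in for the REDCap export.
responses = pd.DataFrame({
    "age": [28, 31, 26, 45, 39, 30, 27, 52],
    "fairness_rating": ["fair", "moderately fair", "fair", "barely fair",
                        "moderately fair", "fair", "not fair at all",
                        "moderately fair"],
})

# Median age with interquartile range, as in the Results section.
q1, median, q3 = responses["age"].quantile([0.25, 0.5, 0.75])
print(f"Median age: {median:.0f} (IQR {q1:.0f}-{q3:.0f})")

# Proportion of each fairness rating across respondents.
print(responses["fairness_rating"].value_counts(normalize=True).round(2))
```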
BACKGROUND: Resources are increasingly spent on artificial intelligence (AI) solutions for medical applications aiming to improve the diagnosis, treatment, and prevention of diseases. While the need for transparency and the reduction of bias in data and algorithm development was addressed in past studies, little is known about active measures undertaken within current AI developments.

OBJECTIVE: This study's objective was to survey AI specialists in healthcare to investigate developers' perception of bias in AI algorithms for healthcare applications.

METHODS: An online survey was provided in both German and English, comprising a maximum of 41 questions using branching logic within the REDCap web application. Only the results of participants with experience in the field of medical AI applications were included in the analysis. Demographic data, technical expertise, and perception of fairness, as well as knowledge of biases in AI, were analyzed, and variations among gender, age, and work environment were assessed.

RESULTS: A total of 151 AI specialists completed the online survey. The median age was 30 years (IQR 26-39), and 67% of respondents were male. Five percent had never heard of biases in AI before; one-third rated their development as fair (31%, 47/151) and another third as moderately fair (34%, 51/151), while 12% (18/151) reported their AI to be barely fair and 1% (2/151) not fair at all. The reasons given for biases were a lack of fair data (68%, 90/132), of guidelines or recommendations (49%, 65/132), or of knowledge (45%, 60/132). We found a significant difference in bias perception by work environment (p=0.020): 5% of respondents working in industry, compared with 25% of respondents working clinically, rated their AI developments as not fair at all or barely fair.

CONCLUSIONS: This study highlights that knowledge of and guidelines for preventive measures against biases, as well as the generation of fair data with the help of the FAIR principles, must be further disseminated to establish fair AI healthcare applications. The difference in fairness perception between AI developers from industry and those from clinical environments needs to be further investigated.
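For readers unfamiliar with the test behind the reported p=0.020, the sketch below runs a chi-square test of independence on a 2x2 table of fairness ratings by work environment using SciPy. The cell counts are invented for illustration and are NOT the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table mirroring the comparison in the
# abstract: rows are work environments, columns are rating groups
# ("not fair at all"/"barely fair" vs. higher ratings).
contingency = [
    [4, 72],   # industry respondents (invented counts)
    [15, 45],  # clinical respondents (invented counts)
]

# chi2_contingency returns the statistic, p-value, degrees of freedom,
# and the expected frequencies under independence.
chi2, p, dof, _expected = chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, p = {p:.3f} (dof = {dof})")
```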