Background

A number of prior studies have demonstrated that individuals with limited English proficiency in the United States are routinely excluded from clinical trial participation. Systematic exclusion through study eligibility criteria that require trial participants to be able to speak, read, and/or understand English affects access to clinical trials and scientific generalizability. We sought to establish the frequency with which English language proficiency is required and, conversely, when non-English languages are affirmatively accommodated in US interventional clinical trials for adult populations.

Methods and findings

We used the advanced search function on ClinicalTrials.gov, specifying interventional studies for adults with at least 1 site in the US. In addition, we used these search criteria to find studies with an available posted protocol. A computer program was written to search for evidence of English or Spanish language requirements, or the posted protocol, when available, was manually read for these language requirements. Of the 14,367 clinical trials registered on ClinicalTrials.gov between 1 January 2019 and 1 December 2020 that met baseline search criteria, 18.98% (95% CI 18.34%–19.62%; n = 2,727) required the ability to read, speak, and/or understand English, and 2.71% (95% CI 2.45%–2.98%; n = 390) specifically mentioned accommodation of translation to another language. The remaining trials in this analysis and the following sub-analyses did not mention English language requirements or accommodation of languages other than English. Of 2,585 federally funded clinical trials, 28.86% (95% CI 27.11%–30.61%; n = 746) required English language proficiency and 4.68% (95% CI 3.87%–5.50%; n = 121) specified accommodation of other languages; of the 5,286 industry-funded trials, 5.30% (95% CI 4.69%–5.90%; n = 280) required English and 0.49% (95% CI 0.30%–0.69%; n = 26) accommodated other languages. Trials related to infectious disease were less likely to specify an English requirement than all registered trials (10.07% versus 18.98%; relative risk [RR] = 0.53; 95% CI 0.44–0.64; p < 0.001). Trials related to COVID-19 were also less likely to specify an English requirement than all registered trials (8.18% versus 18.98%; RR = 0.43; 95% CI 0.33–0.56; p < 0.001). Trials with a posted protocol (n = 366) were more likely than all registered clinical trials to specify an English requirement (36.89% versus 18.98%; RR = 1.94; 95% CI 1.69–2.23; p < 0.001). A separate analysis of studies with posted protocols in 4 therapeutic areas (depression, diabetes, breast cancer, and prostate cancer) demonstrated that clinical trials related to depression were the most likely to require English (52.24%; 95% CI 40.28%–64.20%). One limitation of this study is that the computer program only searched for the terms “English” and “Spanish” and may have missed evidence of other language accommodations. Another limitation is that we did not differentiate between requirements to read English, speak English, understand English, and be a native English speaker; we grouped these requirements together in the category of English language requirements.

Conclusions

A meaningful percentage of US interventional clinical trials for adults exclude individuals who cannot read, speak, and/or understand English, or who are not native English speakers.
To advance more inclusive and generalizable research, funders, sponsors, institutions, investigators, institutional review boards, and others should prioritize translating study materials and eliminate language requirements unless justified either scientifically or ethically.
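The authors' program is not included in the abstract. Purely as an illustration of the screening and statistics described above, the Python sketch below (all function names are hypothetical, and it assumes eligibility-criteria text has already been exported from ClinicalTrials.gov into plain strings) flags trials whose eligibility text mentions English or Spanish and recomputes the headline proportion and one of the reported relative risks using a normal-approximation confidence interval for the proportion and a log-scale interval for the relative risk; run on the counts reported above, it reproduces the 18.98% (18.34%–19.62%) and 1.94 (1.69–2.23) figures within rounding.

```python
# Illustrative sketch only -- not the authors' program. It assumes the
# eligibility-criteria text for each registered trial has already been
# exported (e.g., from ClinicalTrials.gov) into plain strings.
import math
import re

ENGLISH_PATTERN = re.compile(r"\benglish\b", re.IGNORECASE)
SPANISH_PATTERN = re.compile(r"\bspanish\b", re.IGNORECASE)


def flag_language_terms(eligibility_text: str) -> dict:
    """Flag whether a trial's eligibility text mentions English or Spanish."""
    return {
        "mentions_english": bool(ENGLISH_PATTERN.search(eligibility_text)),
        "mentions_spanish": bool(SPANISH_PATTERN.search(eligibility_text)),
    }


def proportion_ci(successes: int, total: int, z: float = 1.96) -> tuple:
    """Proportion with a normal-approximation (Wald) 95% confidence interval."""
    p = successes / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return p, p - half_width, p + half_width


def relative_risk_ci(a: int, n1: int, b: int, n2: int, z: float = 1.96) -> tuple:
    """Relative risk of group 1 versus group 2 with a log-scale 95% CI."""
    p1, p2 = a / n1, b / n2
    rr = p1 / p2
    se_log = math.sqrt((1 - p1) / a + (1 - p2) / b)
    return rr, rr * math.exp(-z * se_log), rr * math.exp(z * se_log)


if __name__ == "__main__":
    # Headline figure reported above: 2,727 of 14,367 trials required English.
    p, lo, hi = proportion_ci(2727, 14367)
    print(f"English required: {p:.2%} (95% CI {lo:.2%} to {hi:.2%})")

    # Reported comparison: trials with a posted protocol (n = 366) versus all
    # registered trials; 135 is back-calculated from the reported 36.89%.
    rr, rr_lo, rr_hi = relative_risk_ci(135, 366, 2727, 14367)
    print(f"Posted-protocol RR = {rr:.2f} (95% CI {rr_lo:.2f} to {rr_hi:.2f})")
```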
This cross-sectional study examines available forms and posting trends of registered trials as well as the frequency of form posting by funder type for trials initiated since the revised Common Rule was implemented.
COVID-19 has accelerated broad trends already in place toward remote research data collection and monitoring. This move implicates novel ethical and regulatory challenges which have not yet received due attention. Existing work is preliminary and does not seek to identify or grapple with the issues in a rigorous and sophisticated way. Here, we provide a framework for identifying and addressing challenges that we believe can help the research community realize the benefits of remote technologies while preserving ethical ideals and public trust. We organize issues into several distinct categories and provide points to consider in a table that can help facilitate ethical design and review of research studies using remote health instruments.
The development of autonomous artificial intelligence (A-AI) products in health care raises novel regulatory challenges, including how to ensure their safety and efficacy in real-world settings. Supplementing a device-centered regulatory scheme with a regulatory scheme that considers A-AI products as a ‘physician extender’ may improve the real-world monitoring of these technologies and produce other benefits, such as increased access to the services offered by these products. In this article, we review the three approaches to the oversight of nurse practitioners, one type of physician extender, in the USA and extrapolate these approaches to produce a framework for the oversight of A-AI products. Under the framework, the US Food and Drug Administration would evaluate A-AI products and determine whether they are allowed to operate independently of physician oversight; required to operate under some physician oversight via a ‘collaborative protocol’ model; or required to operate under direct physician oversight via a ‘supervisory protocol’ model.
Introduction: Recent revisions to the US Federal Common Rule governing human studies funded or conducted by the federal government require the provision of a "concise and focused" key information (KI) section in informed consent forms (ICFs). We performed a systematic study to characterize KI sections of ICFs for federally funded trials available on ClinicalTrials.gov. Methods: We downloaded ICFs posted on ClinicalTrials.gov for treatment trials initiated on or after the revised Common Rule effective date. Trial records (n = 102) were assessed by intervention type, study phase, recruitment status, and enrollment size. The ICFs and their KI sections, if present, were characterized by page length, word count, readability, topic, and formatting elements. Results: Of the 102 trial records, 76 had identifiable KI sections that were, on average, 10% of the total length of full ICF documents. KI readability grade level was not notably different from other sections of ICFs. Most KI sections were distinguished by section headers and included lists but contained few other formatting elements. Most KI sections included a subset of topics consistent with the basic elements of informed consent specified in the Common Rule. Conclusion: Many of the KI sections in the study sample aligned with practices suggested in the preamble to the revised Common Rule. Further, our results suggest that some KI sections were tailored in study-specific ways. Nevertheless, guidelines on how to write concise and comprehensible KI sections would improve the utility and readability of KI sections.
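As a rough illustration of the kind of characterization this abstract describes (word counts and readability grade levels for KI sections), the Python sketch below uses a crude vowel-group syllable heuristic and the standard Flesch-Kincaid grade formula; the function names and sample text are hypothetical, and validated readability software would be preferable in practice.

```python
# Illustrative sketch, not the study's instrument: a crude word count and
# Flesch-Kincaid grade estimate for a key information (KI) section.
import re


def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))


def characterize_ki_section(text: str) -> dict:
    """Return word count, sentence count, and approximate FK grade level."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Flesch-Kincaid grade level formula.
    grade = (0.39 * (len(words) / max(1, len(sentences)))
             + 11.8 * (syllables / max(1, len(words)))
             - 15.59)
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "fk_grade": round(grade, 1),
    }


if __name__ == "__main__":
    sample = ("The purpose of this study is to test a new treatment. "
              "Taking part is voluntary. You may stop at any time.")
    print(characterize_ki_section(sample))
```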