DeepCOVID-XR, an artificial intelligence algorithm for detecting COVID-19 on chest radiographs, demonstrated performance similar to the consensus of experienced thoracic radiologists.

Key Results:
• DeepCOVID-XR classified 2,214 test images (1,194 COVID-19 positive) with an accuracy of 83% and an AUC of 0.90 compared with the reference standard of RT-PCR.
• On 300 random test images (134 COVID-19 positive), DeepCOVID-XR's accuracy was 82% (AUC 0.88), compared with 5 individual thoracic radiologists (accuracy 76%-81%) and the consensus of all 5 radiologists (accuracy 81%, AUC 0.85).
• Using the consensus interpretation of the radiologists as the reference standard, DeepCOVID-XR's AUC was 0.95.

Abbreviations: COVID-19 = Coronavirus Disease 2019, RT-PCR = real-time polymerase chain reaction, AI = artificial intelligence, AUC = area under the curve, ROC = receiver operating characteristic, CNN = convolutional neural network.

See also the editorial by van Ginneken.
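The two metrics reported above can be illustrated with a small sketch. This is not the DeepCOVID-XR code; it is a toy, pure-Python example of how accuracy and AUC are computed when a classifier's scores are compared against a binary reference standard such as RT-PCR (AUC via the rank-sum/Mann-Whitney formulation):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the reference standard."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def roc_auc(y_true, y_score):
    """AUC as the probability that a random positive case receives a
    higher score than a random negative case (ties count as 0.5)."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy data: 4 positives, 4 negatives, scores in [0, 1], threshold 0.5.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_score = [0.9, 0.8, 0.7, 0.3, 0.4, 0.2, 0.1, 0.05]
y_pred = [1 if s >= 0.5 else 0 for s in y_score]

print(accuracy(y_true, y_pred))  # 0.875 (one positive falls below threshold)
print(roc_auc(y_true, y_score))  # 0.9375 (one positive outranked by one negative)
```

Note that accuracy depends on the chosen threshold, while AUC summarizes ranking quality across all thresholds, which is why the abstract reports both.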
We report here our findings from adolescent and young adult females (ages 14-25) with a family history of fragile X syndrome regarding their perceptions of the optimal ages for 1) learning fragile X is inherited, 2) learning one could be a carrier for fragile X, and 3) offering carrier testing for fragile X. Three groups were enrolled: those who knew they were carriers, those who knew they were noncarriers, and those who knew only that they were at-risk to be a carrier. Only two of the 53 participants felt that offering carrier testing should be delayed until the age of 18 years. Participants who knew only that they were at-risk to be a carrier provided older optimal ages for offering carrier testing than those who knew their actual carrier status. Participants did not express regret or negative emotions about the timing of the disclosure of genetic risk information in their own experiences. Participants' reasoning behind reported ages for informing about genetic risk and offering carrier testing varied depending on what type of information was being disclosed, which carrier status group the participant belonged to, and the preferred age for learning the information. Study findings suggest that decisions regarding when to inform about genetic risk and offer testing should be tailored to the individual needs of the child and his/her family.
Importance
Transthyretin amyloid cardiomyopathy (ATTR-CM) is a form of heart failure (HF) with preserved ejection fraction (HFpEF). Technetium Tc 99m pyrophosphate scintigraphy (PYP) enables ATTR-CM diagnosis. It is unclear which patients with HFpEF have sufficient risk of ATTR-CM to warrant PYP.

Objective
To derive and validate a simple ATTR-CM score to predict increased risk of ATTR-CM in patients with HFpEF.

Design, Setting, and Participants
Retrospective cohort study of 666 patients with HF (ejection fraction ≥ 40%) and suspected ATTR-CM referred for PYP at Mayo Clinic, Rochester, Minnesota, from May 10, 2013, through August 31, 2020. These data were analyzed September 2020 through December 2020. A logistic regression model predictive of ATTR-CM was derived and converted to a point-based ATTR-CM risk score. The score was further validated in a community ATTR-CM epidemiology study of older patients with HFpEF with increased left ventricular wall thickness ([WT] ≥ 12 mm) and in an external (Northwestern University, Chicago, Illinois) HFpEF cohort referred for PYP. Race was self-reported by the participants.
In all cohorts, both case patients and control patients were definitively ascertained by PYP scanning and specialist evaluation.

Main Outcomes and Measures
Performance of the derived ATTR-CM score in all cohorts (referral validation, community validation, and external validation) and prevalence of a high-risk ATTR-CM score in 4 multinational HFpEF clinical trials.

Results
Participant cohorts included were referral derivation (n = 416; 13 participants [3%] were Black and 380 participants [94%] were White; ATTR-CM prevalence = 45%), referral validation (n = 250; 12 participants [5%] were Black and 228 participants [93%] were White; ATTR-CM prevalence = 48%), community validation (n = 286; 5 participants [2%] were Black and 275 participants [96%] were White; ATTR-CM prevalence = 6%), and external validation (n = 66; 23 participants [37%] were Black and 36 participants [58%] were White; ATTR-CM prevalence = 39%). Score variables included age, male sex, hypertension diagnosis, relative WT more than 0.57, posterior WT of 12 mm or more, and ejection fraction less than 60% (score range −1 to 10). Discrimination (area under the receiver operating characteristic curve [AUC] 0.89; 95% CI, 0.86-0.92; P < .001) and calibration (Hosmer-Lemeshow χ2 = 4.6; P = .46) were strong. Discrimination (AUC ≥ 0.84; P < .001 for all) and calibration (Hosmer-Lemeshow χ2 = 2.8, P = .84; χ2 = 4.4, P = .35; χ2 = 2.5, P = .78 in the referral, community, and external validation cohorts, respectively) were maintained in all validation cohorts. Precision-recall curves and predictive value vs prevalence plots indicated clinically useful classification performance for a score of 6 or more (positive predictive value ≥ 25%) in clinically relevant ATTR-CM prevalence (≥ 10% of patients with HFpEF) scenarios.
In the HFpEF clinical trials, 11% to 35% of male and 0% to 6% of female patients had a high-risk (≥ 6) ATTR-CM score.

Conclusions and Relevance
A simple 6-variable clinical score may be used to guide use of PYP and increase recognition of ATTR-CM among patients with HFpEF in the community. Further validation in larger and more diverse populations is needed.
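The point-based structure described above can be sketched in code. The six variables and their cutoffs come from the abstract, but the point values below are illustrative assumptions chosen only to respect the published score range (−1 to 10); the validated weights are in the original article and must be used in any real application:

```python
def attr_cm_score(age, male, hypertension, relative_wt, posterior_wt_mm, ef_pct):
    """Toy version of a point-based ATTR-CM risk score.
    Variables/cutoffs are from the abstract; ALL POINT VALUES ARE
    HYPOTHETICAL placeholders, not the published weights."""
    score = 0
    # Age bands (hypothetical points)
    if age >= 80:
        score += 5
    elif age >= 70:
        score += 3
    elif age >= 60:
        score += 2
    if male:
        score += 2          # male sex
    if hypertension:
        score -= 1          # hypertension diagnosis counts against ATTR-CM
    if relative_wt > 0.57:
        score += 1          # relative wall thickness > 0.57
    if posterior_wt_mm >= 12:
        score += 1          # posterior wall thickness >= 12 mm
    if ef_pct < 60:
        score += 1          # ejection fraction < 60%
    return score

# Per the abstract, a score of 6 or more was the clinically useful
# threshold (positive predictive value >= 25% at >= 10% prevalence).
s = attr_cm_score(age=82, male=True, hypertension=False,
                  relative_wt=0.60, posterior_wt_mm=13, ef_pct=55)
print(s, s >= 6)  # 10 True
```

The appeal of such a score is that every input is available from routine demographics and echocardiography, so it can triage who should be referred for PYP without any additional testing.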
Objective
Clinical knowledge-enriched transformer models (eg, ClinicalBERT) have achieved state-of-the-art results on clinical natural language processing (NLP) tasks. A core limitation of these transformer models is their substantial memory consumption due to the full self-attention mechanism, which leads to performance degradation on long clinical texts. To overcome this, we propose to leverage long-sequence transformer models (eg, Longformer and BigBird), which extend the maximum input sequence length from 512 to 4096 tokens, to enhance the ability to model long-term dependencies in long clinical texts.

Materials and methods
Inspired by the success of long-sequence transformer models and the fact that clinical notes are mostly long, we introduce 2 domain-enriched language models, Clinical-Longformer and Clinical-BigBird, which are pretrained on a large-scale clinical corpus. We evaluate both language models on 10 baseline tasks, including named entity recognition, question answering, natural language inference, and document classification.

Results
The results demonstrate that Clinical-Longformer and Clinical-BigBird consistently and significantly outperform ClinicalBERT and other short-sequence transformers on all 10 downstream tasks and achieve new state-of-the-art results.

Discussion
Our pretrained language models provide the bedrock for clinical NLP with long texts. We have made our source code available at https://github.com/luoyuanlab/Clinical-Longformer, and the pretrained models are available for public download at https://huggingface.co/yikuan8/Clinical-Longformer.

Conclusion
This study demonstrates that clinical knowledge-enriched long-sequence transformers are able to learn long-term dependencies in long clinical text. Our methods can also inspire the development of other domain-enriched long-sequence transformers.
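The practical effect of raising the input window from 512 to 4096 tokens can be illustrated with a minimal sketch. The note lengths below are simulated, not drawn from any real corpus or tokenizer; the point is only that a 512-token model must truncate or chunk most long notes, while a 4096-token model can read them whole:

```python
def fits(num_tokens, max_len):
    """Does a note fit in the model's input window without truncation?"""
    return num_tokens <= max_len

def coverage(note_lengths, max_len):
    """Fraction of notes that fit without truncation."""
    return sum(fits(n, max_len) for n in note_lengths) / len(note_lengths)

# Hypothetical token counts for 8 clinical notes.
note_lengths = [180, 450, 900, 1500, 2200, 3100, 3900, 5200]

print(coverage(note_lengths, 512))   # 0.25  -- short-sequence transformer
print(coverage(note_lengths, 4096))  # 0.875 -- long-sequence transformer
```

In practice the released checkpoints (eg, the Clinical-Longformer model at the Hugging Face link above) would be loaded with a standard transformer library and fed whole notes directly, avoiding the information loss that truncation or chunking introduces.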