IMPORTANCE Mammography screening currently relies on subjective human interpretation. Artificial intelligence (AI) advances could be used to increase mammography screening accuracy by reducing missed cancers and false positives.

OBJECTIVE To evaluate whether AI can overcome human mammography interpretation limitations with a rigorous, unbiased evaluation of machine learning algorithms.

DESIGN, SETTING, AND PARTICIPANTS In this diagnostic accuracy study conducted between September 2016 and November 2017, an international, crowdsourced challenge was hosted to foster AI algorithm development focused on interpreting screening mammography. More than 1100 participants comprising 126 teams from 44 countries participated. Analysis began November 18, 2016.

MAIN OUTCOMES AND MEASUREMENTS Algorithms used images alone (challenge 1) or combined images, previous examinations (if available), and clinical and demographic risk factor data (challenge 2) and output a score that translated to cancer yes/no within 12 months. Algorithm accuracy for breast cancer detection was evaluated using area under the curve, and algorithm specificity was compared with radiologists' specificity with radiologists' sensitivity set at 85.9% (United States) and 83.9% (Sweden). An ensemble method aggregating top-performing AI algorithms and radiologists' recall assessment was developed and evaluated.

RESULTS Overall, 144 231 screening mammograms from 85 580 US women (952 cancer positive ≤12 months from screening) were used for algorithm training and validation. A second independent validation cohort included 166 578 examinations from 68 008 Swedish women (780 cancer positive). The top-performing algorithm achieved an area under the curve of 0.858 (United States) and 0.903 (Sweden) and 66.2% (United States) and 81.2% (Sweden) specificity at the radiologists' sensitivity, lower than community-practice radiologists' specificity of 90.5% (United States) and 98.5% (Sweden). Combining top-performing algorithms and US radiologist assessments resulted in a higher area under the curve of 0.942 and achieved a significantly improved specificity (92.0%) at the same sensitivity.

CONCLUSIONS AND RELEVANCE While no single AI algorithm outperformed radiologists, an ensemble of AI algorithms combined with radiologist assessment in a single-reader screening environment improved overall accuracy. This study underscores the potential of using machine (continued)
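The abstract's key comparison fixes sensitivity at the radiologists' level and asks what specificity each reader (algorithm, radiologist, or ensemble) achieves there. A minimal sketch of that evaluation, with an invented toy ensemble that simply averages an algorithm's continuous score with the radiologist's binary recall decision (the actual ensemble method in the study is not specified here; all scores and labels below are made up for illustration):

```python
import numpy as np

def specificity_at_sensitivity(scores, labels, target_sensitivity):
    """Find the highest score threshold whose sensitivity reaches
    `target_sensitivity`, then report specificity at that threshold."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    # Sweep thresholds from high to low so sensitivity grows monotonically.
    for t in np.sort(np.unique(scores))[::-1]:
        pred = scores >= t
        sensitivity = np.mean(pred[labels])        # true-positive rate
        if sensitivity >= target_sensitivity:
            specificity = np.mean(~pred[~labels])  # true-negative rate
            return t, specificity
    return float(np.min(scores)), 0.0

# Hypothetical toy data: 8 examinations, 3 cancers.
algo_scores = np.array([0.9, 0.2, 0.8, 0.4, 0.1, 0.7, 0.3, 0.05])
radiologist = np.array([1,   0,   1,   1,   0,   0,   0,   0   ])  # recall yes/no
labels      = np.array([1,   0,   1,   0,   0,   1,   0,   0   ])  # cancer within 12 mo

# One simple (assumed) way to combine the two readers: average them.
ensemble = 0.5 * algo_scores + 0.5 * radiologist
t, spec = specificity_at_sensitivity(ensemble, labels, 0.85)
```

The same helper applied to `algo_scores` alone versus `ensemble` reproduces the kind of comparison the study reports: specificity of each reader at a common, fixed sensitivity.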
The incidence of prostate cancer (PCa) in Asian populations used to be much lower than in Western populations; however, in recent years the incidence and mortality rates of PCa in some Asian countries have grown rapidly. This collaborative report summarizes the latest epidemiological information, risk factors, and racial differences in PCa diagnosis; the current status of and new trends in surgical management; and novel agents for castration-resistant prostate cancer. We believe this information will be helpful in clinical decision making for urologists and oncologists, health-care ministries, and medical researchers.
We propose a fully unsupervised multi-modal deformable image registration method (UMDIR), which does not require any ground truth deformation fields or any aligned multi-modal image pairs during training. Multi-modal registration is a key problem in many medical image analysis applications. It is very challenging due to the complicated and unknown relationships between different modalities. In this paper, we propose an unsupervised learning approach that reduces the multi-modal registration problem to a mono-modal one through image disentangling. In particular, we decompose images of both modalities into a common latent shape space and separate latent appearance spaces via an unsupervised multi-modal image-to-image translation approach. The proposed registration approach is then built on the factorized latent shape code, under the assumption that the intrinsic shape deformation present in the original image domain is preserved in this latent space. Specifically, two metrics are proposed for training the network: a latent similarity metric defined in the common shape space and a learning-based image similarity metric based on an adversarial loss. We examined different variations of our proposed approach and compared them with conventional state-of-the-art multi-modal registration methods. Results show that our proposed methods achieve competitive performance against other methods at substantially reduced computation time.
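The core idea above, reducing multi-modal registration to a mono-modal problem by comparing images in a shared latent shape space, can be sketched in a toy linear setting. Here the "encoders" are simply inverses of invented linear appearance transforms (real UMDIR learns nonlinear encoders via image-to-image translation; everything below is a simplifying assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a multi-modal image pair: the same underlying latent
# "shape" code rendered through two modality-specific appearance maps.
shape = rng.normal(size=16)             # shared latent shape code
A_app = rng.normal(size=(16, 16))       # appearance transform, modality A (invented)
B_app = rng.normal(size=(16, 16))       # appearance transform, modality B (invented)
img_a = A_app @ shape
img_b = B_app @ shape

# Idealized shape encoders: here exact inverses of the appearance maps,
# so both images decode to the same latent shape code.
enc_a = np.linalg.inv(A_app)
enc_b = np.linalg.inv(B_app)

def latent_similarity(x, y, enc_x, enc_y):
    """Squared L2 distance in the common latent shape space -- the
    mono-modal criterion optimized instead of a cross-modal image
    similarity."""
    return float(np.sum((enc_x @ x - enc_y @ y) ** 2))

loss_aligned = latent_similarity(img_a, img_b, enc_a, enc_b)
# A misaligned pair (shape code circularly shifted) scores much worse.
loss_misaligned = latent_similarity(img_a, B_app @ np.roll(shape, 1), enc_a, enc_b)
```

The point of the construction is that `latent_similarity` is small exactly when the two images depict the same shape, regardless of how differently the two modalities render it, which is what lets a mono-modal registration criterion drive multi-modal alignment.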
Foods high in resistant starch (RS) help prevent various diseases, including diabetes, colon cancer, diarrhea, and chronic renal or hepatic diseases. Elevated RS in rice is important for public health since rice is a staple food for half of the world's population. A japonica mutant 'Jiangtangdao 1' (RS = 11.67%) was crossed with an indica cultivar 'Miyang 23' (RS = 0.41%). The mutant sbe3-rs, which explained 60.4% of RS variation, was mapped between RM6611 and RM13366 on chromosome 2 (LOD = 36) using 178 F2 plants genotyped with 106 genome-wide polymorphic SSR markers. Using 656 plants from four F3:4 families, sbe3-rs was fine mapped to a 573.3 kb region between InDel 2 and InDel 6 using one STS, five SSR, and seven InDel markers. SBE3, which codes for starch branching enzyme, was identified as a candidate gene within the putative region. Nine pairs of primers covering 22 exons were designed to comparatively sequence genomic DNA of the wild type for SBE3 and the mutant for sbe3-rs. Sequence analysis identified a missense mutation site where Leu-599 of the wild type was changed to Pro-599 of the mutant in the SBE3 coding region. Because the point mutation resulted in the loss of a restriction enzyme site, sbe3-rs was not digested by a CAPS marker for the SpeI site while SBE3 was. Co-segregation of the digestion pattern with RS content among the 178 F2 plants further supported sbe3-rs being responsible for RS in rice. As a result, the CAPS marker could be used in marker-assisted breeding to develop rice cultivars with elevated RS, which is otherwise difficult to assess accurately in crops. Transgenic technology should be employed to reach a definitive conclusion about sbe3-rs.
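The CAPS genotyping logic described above reduces to a simple sequence test: the SpeI recognition site (ACTAGT) is present in the wild-type SBE3 amplicon but destroyed by the sbe3-rs point mutation, so only wild-type DNA is cut. A minimal sketch, with invented flanking sequences (the real amplicon sequence is not given in the abstract):

```python
# SpeI recognition sequence (cuts A^CTAGT).
SPEI_SITE = "ACTAGT"

def spei_digests(amplicon: str) -> bool:
    """True if SpeI can cut the amplicon, i.e. its recognition site is present."""
    return SPEI_SITE in amplicon.upper()

# Hypothetical fragments for illustration only: the single-base change
# (T -> C) that accompanies the Leu-599 -> Pro-599 substitution destroys
# the site in the mutant.
wild_type = "GGCACTAGTTTCC"   # site intact: SpeI cuts (SBE3 allele)
mutant    = "GGCACCAGTTTCC"   # site lost: SpeI does not cut (sbe3-rs allele)
```

In a marker-assisted breeding workflow, plants whose PCR product resists digestion would be scored as carrying the high-RS sbe3-rs allele, which is the co-segregation pattern the study used as supporting evidence.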