We present a deep convolutional neural network for breast cancer screening exam classification, trained and evaluated on over 200,000 exams (over 1,000,000 images). Our network achieves an AUC of 0.895 in predicting whether there is cancer in the breast when tested on the screening population. We attribute the high accuracy of our model to a two-stage training procedure, which allows us to use a very high-capacity patch-level network to learn from pixel-level labels alongside a network learning from macroscopic breast-level labels. To validate our model, we conducted a reader study with 14 readers, each reading 720 screening mammogram exams, and found our model to be as accurate as experienced radiologists when presented with the same data. Finally, we show that a hybrid model, averaging the probability of malignancy predicted by a radiologist with a prediction of our neural network, is more accurate than either of the two separately. To better understand our results, we conduct a thorough analysis of our network's performance on different subpopulations of the screening population, model design, training procedure, errors, and properties of its internal representations.

deep learning | deep convolutional neural networks | breast cancer screening | mammography

Breast cancer is the second leading cancer-related cause of death among women in the US. In 2014, over 39 million screening and diagnostic mammography exams were performed in the US. It is estimated that in 2015, 232,000 women were diagnosed with breast cancer and approximately 40,000 died from it (1). Although mammography is the only imaging test that has reduced breast cancer mortality (2-4), there has been discussion regarding the potential harms of screening, including false positive recalls and associated false positive biopsies. The vast majority of the 10-15% of women asked to return following an inconclusive screening mammogram undergo another mammogram and/or ultrasound for clarification. After the additional imaging exams, many of these findings are determined to be benign, and only 10-20% are recommended to undergo a needle biopsy for further work-up. Among these, only 20-40% yield a diagnosis of cancer (5). Evidently, there is an unmet need to shift the balance of routine breast cancer screening towards more benefit and less harm.

Traditional computer-aided detection (CAD) in mammography is routinely used by radiologists to assist with image interpretation, despite multicenter studies showing that these CAD programs do not improve their diagnostic performance (6). These CAD programs typically use handcrafted features to mark sites on a mammogram that appear distinct from normal tissue structures. The radiologist decides whether to recall these findings, determining clinical significance and actionability. Recent developments in deep learning (7), in particular deep convolutional neural networks (CNNs) (8-12), open possibilities for creating a new generation of CAD-like tools.

This paper makes several contributions. Primarily, we train and evaluate a set of stro...
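The hybrid model described in the abstract above is, at its core, a weighted average of two probability-of-malignancy estimates. The following is a minimal sketch of that kind of ensembling; the equal weighting, the toy labels and scores, and the use of scikit-learn's AUC computation are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def hybrid_pom(radiologist_pom, model_pom, weight=0.5):
    """Weighted average of radiologist and model probabilities of malignancy."""
    radiologist_pom = np.asarray(radiologist_pom, dtype=float)
    model_pom = np.asarray(model_pom, dtype=float)
    return weight * radiologist_pom + (1.0 - weight) * model_pom

# Toy labels and scores (illustrative only, not data from the study).
labels = np.array([0, 1, 0, 1, 1, 0])
rad    = np.array([0.20, 0.70, 0.40, 0.55, 0.80, 0.10])
net    = np.array([0.10, 0.85, 0.15, 0.60, 0.75, 0.30])

for name, scores in [("radiologist", rad), ("network", net), ("hybrid", hybrid_pom(rad, net))]:
    print(f"{name:11s} AUC = {roc_auc_score(labels, scores):.3f}")
```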
Integrated sensing and communication (ISAC), in which sensing and communication share the same frequency band and hardware, has emerged as a key technology for future wireless systems for two main reasons. First, many important application scenarios in fifth generation (5G) systems and beyond, such as autonomous vehicles, Wi-Fi sensing and extended reality, require both high-performance sensing and wireless communications. Second, with millimeter wave and massive multiple-input multiple-output (MIMO) technologies widely employed in 5G and beyond, future communication signals tend to have high resolution in both the time and angular domains, opening up the possibility for ISAC. As such, ISAC has attracted tremendous research interest and attention in both academia and industry. Early work on ISAC has focused on the design, analysis and optimization of practical ISAC technologies for various ISAC systems. While this line of work is necessary, it is equally important to study the fundamental limits of ISAC in order to understand the gap between current state-of-the-art technologies and the performance limits, and to provide useful insights and guidance for the development of better ISAC technologies that can approach those limits. In this paper, we aim to provide a comprehensive survey of the current research progress on the fundamental limits of ISAC. In particular, we first propose a systematic classification method for both traditional radio sensing (such as radar sensing and wireless localization) and ISAC so that they can be naturally incorporated into a unified framework. We then summarize the major performance metrics and bounds used in sensing, communications and ISAC, respectively. After that, we present the current research progress on the fundamental limits of each class.
Existing computational models for salient object detection primarily rely on hand-crafted features, which are only able to capture low-level contrast information. In this paper, we learn hierarchical contrast features by formulating salient object detection as a binary labeling problem using deep learning techniques. A novel superpixelwise convolutional neural network approach, called SuperCNN, is proposed to learn the internal representations of saliency in an efficient manner. In contrast to classical convolutional networks, SuperCNN has four main properties. First, the proposed method is able to learn hierarchical contrast features, as it is fed by two meaningful superpixel sequences, which is much more effective for detecting salient regions than feeding raw image pixels. Second, as SuperCNN recovers the contextual information among superpixels, it enables large context to be involved in the analysis efficiently. Third, benefiting from the superpixelwise mechanism, the number of predictions required for a densely labeled map is greatly reduced. Fourth, saliency can be detected independently of region size by utilizing a multiscale network structure. Experiments show that SuperCNN can robustly detect salient objects and outperforms the state-of-the-art methods on three benchmark datasets.
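To illustrate the superpixelwise idea, the sketch below applies a small 1D convolutional network to a per-superpixel feature sequence and emits one saliency score per superpixel, rather than one prediction per pixel. The architecture, feature dimensionality, and sequence length are hypothetical placeholders chosen for illustration; this is not the authors' SuperCNN design.

```python
import torch
import torch.nn as nn

class SuperpixelSaliencyNet(nn.Module):
    """Illustrative 1D CNN over a per-superpixel contrast-feature sequence.

    Predicts a saliency score for each superpixel, so a densely labeled map
    needs only as many predictions as there are superpixels, not pixels.
    """
    def __init__(self, in_features=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_features, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=1),  # one saliency logit per superpixel
        )

    def forward(self, x):
        # x: (batch, in_features, num_superpixels)
        return torch.sigmoid(self.net(x)).squeeze(1)

# One image segmented into 200 superpixels, 3 contrast features each (hypothetical).
features = torch.randn(1, 3, 200)
saliency = SuperpixelSaliencyNet()(features)
print(saliency.shape)  # torch.Size([1, 200]): one saliency value per superpixel
```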
Figure 1: Our method performs simultaneous part-level segmentation and labeling of input sketches (a), using database components and their interrelations. It first produces many local interpretations (b), which are optimized into a global interpretation (c) that fits the input sketches as well as forming plausible structures, with which the input sketches can be appropriately labeled (d).

Abstract: We present a data-driven approach to derive part-level segmentation and labeling of free-hand sketches, which depict single objects with multiple parts. Our method performs segmentation and labeling simultaneously, by inferring a structure that best fits the input sketch, through selecting and connecting 3D components in the database. The problem is formulated using Mixed Integer Programming, which optimizes over both the local fitness of the selected components and the global plausibility of the connected structure. Evaluations show that our algorithm is significantly better than straightforward approaches based on direct retrieval or part assembly, and can effectively handle challenging variations in the sketch.
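As a rough illustration of this kind of Mixed Integer Programming formulation, the sketch below selects one candidate component per part so as to maximize a local-fitness term plus a pairwise compatibility (global plausibility) term. The candidate names, scores, constraints, and the PuLP solver are assumptions made for illustration; they are not the paper's actual formulation or data.

```python
# pip install pulp
import pulp

# Hypothetical candidate components and their local fitness to the sketched part.
candidates = {"seat_a": 0.9, "seat_b": 0.4, "back_a": 0.7, "back_b": 0.8, "base_a": 0.6}
# Hypothetical pairwise compatibility bonus for components that connect plausibly.
compatible = {("seat_a", "back_b"): 0.5, ("seat_b", "back_a"): 0.2}

prob = pulp.LpProblem("component_selection", pulp.LpMaximize)
x = {c: pulp.LpVariable(c, cat="Binary") for c in candidates}  # select component c?
y = {p: pulp.LpVariable(f"pair_{i}", cat="Binary") for i, p in enumerate(compatible)}

# Objective: local fitness of selected components + compatibility of selected pairs.
prob += (pulp.lpSum(candidates[c] * x[c] for c in candidates)
         + pulp.lpSum(compatible[p] * y[p] for p in compatible))

# A pair bonus counts only if both of its components are selected (linearization).
for (a, b), var in y.items():
    prob += var <= x[a]
    prob += var <= x[b]

# Pick exactly one component per part category (here: the prefix before "_").
for part in {c.split("_")[0] for c in candidates}:
    prob += pulp.lpSum(x[c] for c in candidates if c.startswith(part)) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([c for c in candidates if x[c].value() == 1])
```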