Abstract. Early detection of Ground Glass Nodules (GGNs) in lung Computed Tomography (CT) images is important for lung cancer prognosis. Because of their indistinct boundaries, manual detection and segmentation of GGNs is labor-intensive and error-prone. In this paper, we propose a novel multi-level learning-based framework for automatic detection and segmentation of GGNs in lung CT images. Our main contributions are: firstly, a multi-level statistical learning-based approach that seamlessly integrates segmentation and detection to improve the overall accuracy of GGN detection within a subvolume. Classification is performed at two levels: voxel-level and object-level. The algorithm starts with a three-phase voxel-level classification step that uses volumetric features computed per voxel to generate a GGN class-conditional probability map. GGN candidates are then extracted from this probability map by incorporating prior knowledge of shape and location, and a GGN object-level classifier determines whether a GGN is present. Secondly, an extensive set of volumetric features is used to capture GGN appearance. Finally, to the best of our knowledge, the GGN dataset used in our experiments is an order of magnitude larger than those in previous work. The effectiveness of our method is demonstrated on a dataset of 1,100 subvolumes (100 containing GGNs) extracted from about 200 subjects.
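To make the two-level pipeline concrete, the sketch below shows one plausible realization under simplifying assumptions: scikit-learn-style classifiers, scipy connected components for candidate extraction, and a toy two-channel voxel feature (the paper's volumetric feature set is far richer). All function names, features, and thresholds here are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def voxel_features(vol):
    """Toy per-voxel features: raw intensity plus Gaussian-smoothed intensity."""
    smooth = ndimage.gaussian_filter(vol.astype(float), sigma=2.0)
    return np.stack([vol.astype(float), smooth], axis=-1)  # (D, H, W, 2)

def detect_ggn(vol, voxel_clf, object_clf, prob_thresh=0.5,
               min_voxels=30, max_voxels=5000):
    """Two-level detection: voxel-level probability map, then object-level check.

    voxel_clf and object_clf are assumed to be pre-trained classifiers with
    scikit-learn's predict_proba/predict interface.
    """
    # Level 1: voxel-level classification -> GGN class-conditional probability map.
    feats = voxel_features(vol)
    prob = voxel_clf.predict_proba(
        feats.reshape(-1, feats.shape[-1]))[:, 1].reshape(vol.shape)

    # Candidate extraction: threshold the map and keep connected components
    # whose size is plausible -- a crude stand-in for the shape/location prior.
    labels, n = ndimage.label(prob > prob_thresh)
    detections = []
    for i in range(1, n + 1):
        mask = labels == i
        size = int(mask.sum())
        if not (min_voxels <= size <= max_voxels):
            continue
        # Level 2: object-level classification on per-candidate summary features.
        obj_feat = [size, prob[mask].mean(), prob[mask].max(),
                    vol[mask].mean(), vol[mask].std()]
        if object_clf.predict([obj_feat])[0] == 1:
            detections.append(mask)
    return detections  # one boolean mask per accepted GGN candidate
```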
In this paper, we propose a learning-based algorithm for automatic medical image annotation based on robust aggregation of learned local appearance cues, achieving high accuracy and robustness against severe disease, imaging artifacts, occlusion, and missing data. The algorithm starts with a number of landmark detectors that collect local appearance cues throughout the image, which are subsequently verified by a group of learned sparse spatial configuration models. In most cases, a decision can already be made at this stage by simply aggregating the verified detections. For the remaining cases, an additional global appearance filtering step provides complementary information for the final decision. This approach is evaluated on a large-scale chest radiograph view identification task, demonstrating very high accuracy (>99.9%) for posteroanterior/anteroposterior (PA-AP) versus lateral view position identification, compared with a recently reported large-scale result of only 98.2% (Luo, 2006). Our approach also achieved the best accuracies on a three-class and a multiclass radiograph annotation task when compared with other state-of-the-art algorithms. Our algorithm has been used to enhance advanced image visualization workflows by enabling content-sensitive hanging protocols and auto-invocation of a computer-aided detection algorithm for identified PA-AP chest images. Finally, we show that the same methodology can be applied to several other image parsing applications, including anatomy/organ region-of-interest prediction and optimized image visualization.
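The decision cascade described above can be sketched as follows: aggregate spatially verified landmark votes, and fall back to a global appearance classifier only when the vote margin is too small. The class interfaces (detect, filter, predict), vote-margin rule, and default threshold are hypothetical assumptions for illustration, not the authors' code.

```python
from collections import Counter

def classify_view(image, landmark_detectors, spatial_verifier, global_clf,
                  min_margin=2):
    """Assign a view label (e.g., PA-AP vs. lateral) to a radiograph."""
    # Stage 1: run each landmark detector; each successful detection carries
    # the view label under which the landmark was found.
    detections = [d.detect(image) for d in landmark_detectors]
    detections = [d for d in detections if d is not None]

    # Stage 2: discard detections inconsistent with the learned sparse
    # spatial configuration models.
    verified = spatial_verifier.filter(detections)

    # Stage 3: aggregate verified votes; accept the majority label if its
    # margin over the runner-up is large enough.
    votes = Counter(d.view_label for d in verified)
    if votes:
        ranked = votes.most_common(2)
        top = ranked[0][1]
        runner_up = ranked[1][1] if len(ranked) > 1 else 0
        if top - runner_up >= min_margin:
            return ranked[0][0]

    # Stage 4: ambiguous case -- defer to the global appearance classifier.
    return global_clf.predict(image)
```

The design mirrors the abstract's claim that most images are resolved by vote aggregation alone, so the (more expensive) global appearance step runs only on the residual ambiguous cases.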