Lung cancer is one of the deadliest cancers, and its effective diagnosis and treatment depend in part on the accurate delineation of the tumor. Human-centered segmentation, currently the most common approach, is subject to inter-observer variability and is also time-consuming, since only experts are capable of providing annotations. Automatic and semi-automatic tumor segmentation methods have recently shown promising results. However, because different researchers have validated their algorithms on various datasets and with various performance metrics, reliably evaluating these methods remains an open challenge. The goal of the Lung-Originated Tumor Segmentation from Computed Tomography Scan (LOTUS) Benchmark, created through the 2018 IEEE Video and Image Processing (VIP) Cup competition, is to provide a unique dataset and pre-defined metrics so that different researchers can develop and evaluate their methods in a unified fashion. The 2018 VIP Cup started with global engagement, with teams from 42 countries accessing the competition data. At the registration stage, there were 129 members clustered into 28 teams from 10 countries, of which 9 teams made it to the final stage and 6 teams successfully completed all the required tasks. In a nutshell, all the algorithms proposed during the competition are based on deep learning models combined with a false positive reduction technique. Methods developed by the three finalists show promising results in tumor segmentation; however, more effort should be put into reducing the false positive rate. This competition manuscript presents an overview of the VIP Cup challenge, along with the proposed algorithms and results.
In this paper we propose the Localized Quality Aware Image Denoising (LQAID) technique for image denoising using deep convolutional neural networks (CNNs). LQAID relies on local quality estimates rather than global cues such as the noise standard deviation, since the perceptual quality of a noisy image typically varies spatially. Specifically, we use localized quality maps generated by DistNet, a spatial quality map estimation method. These quality maps are used to augment the noisy image and guide the denoising process. The augmented noisy image is denoised using a deep fully convolutional network (FCN) trained with mean squared error (MSE) as the loss function. The proposed approach shows state-of-the-art performance both qualitatively and quantitatively on two vision datasets: TID2008 and BSD500. We also show that the proposed approach possesses excellent generalization ability. Lastly, the proposed approach is completely blind, since it neither requires information about the strength of the additive noise nor tries to explicitly estimate it.
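The core idea of quality-map-guided denoising can be illustrated with a toy sketch. Note that LQAID itself learns the mapping with a trained deep FCN; the stand-in below merely shows how a spatial quality map (here, a hypothetical array with 1 = high perceptual quality, 0 = low) can modulate denoising strength per pixel, using a simple mean filter in place of the learned network. The function names and the blending rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

def box_blur(img, k=3):
    # Toy stand-in for a learned denoiser: k-by-k mean filter with edge padding.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def quality_guided_denoise(noisy, quality_map):
    # Blend each noisy pixel with a smoothed estimate, smoothing more
    # aggressively where the local quality map flags low perceptual quality.
    smoothed = box_blur(noisy)
    w = np.clip(quality_map, 0.0, 1.0)
    return w * noisy + (1.0 - w) * smoothed
```

In the actual approach, the quality map would instead be concatenated with the noisy image as an extra input channel to the FCN, letting the network learn how to exploit the local quality cues rather than applying a fixed blending rule.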