Breast cancer is a fatal disease affecting women, and early detection and proper treatment are crucial. Correctly classifying medical images is the first and most important step in the cancer diagnosis stage. Deep learning-based classification methods have advanced accuracy across various domains. However, as deep learning improves, neural networks grow deeper, raising challenges such as overfitting and gradient vanishing. Medical images, for instance, are simpler than ordinary images, making them particularly vulnerable to overfitting. We present breast histopathological classification methods with two deep neural networks, Xception and LightXception, aided by voting schemes over split images. Most deep neural networks classify thousands of image classes, but the breast histopathological image classes are far fewer than those of other image classification tasks. Because the BreakHis dataset is relatively simpler than typical image datasets such as ImageNet, applying conventional very deep neural networks may suffer from the aforementioned overfitting or gradient-vanishing problems. Additionally, very deep networks require more resources, leading to high computational costs. Consequently, we propose a new network, LightXception, built by cutting off layers at the bottom of the Xception network and reducing the number of channels in the convolution filters. LightXception has only about 35% of the parameters of the original Xception network, at minimal expense in performance. On images with a 100X magnification factor, the performance comparisons for Xception vs. LightXception are 97.42% vs. 97.31% on classification accuracy, 97.42% vs. 97.42% on recall, and 99.26% vs. 98.67% on precision.
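As a rough sketch of how the truncated-network-plus-voting pipeline could be assembled, the snippet below cuts a Keras Xception model at an intermediate layer and majority-votes over image tiles. The cut layer, classifier head, tile size, and class count are illustrative assumptions, not the paper's exact LightXception definition (which also narrows the convolution channel widths, something that requires rebuilding the architecture rather than slicing a pretrained graph).

```python
import numpy as np
import tensorflow as tf

NUM_CLASSES = 2  # e.g., benign vs. malignant for binary BreakHis classification

# Truncate Xception at an intermediate block and attach a small head.
# "block13_sepconv2_act" is a hypothetical cut point, not the paper's choice.
base = tf.keras.applications.Xception(include_top=False, weights=None,
                                      input_shape=(224, 224, 3))
cut = base.get_layer("block13_sepconv2_act").output
x = tf.keras.layers.GlobalAveragePooling2D()(cut)
out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(base.input, out)

def split_into_patches(image, patch=224):
    """Split an H x W x 3 image into non-overlapping patch x patch tiles."""
    h, w, _ = image.shape
    return np.stack([image[i:i + patch, j:j + patch]
                     for i in range(0, h - patch + 1, patch)
                     for j in range(0, w - patch + 1, patch)])

def predict_with_voting(model, image):
    """Classify each tile, then majority-vote over the tile predictions."""
    patches = split_into_patches(image).astype("float32")
    patches = tf.keras.applications.xception.preprocess_input(patches)
    votes = np.argmax(model.predict(patches, verbose=0), axis=1)
    return np.bincount(votes, minlength=NUM_CLASSES).argmax()
```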
Image analysis based on machine vision is widely used in the smart industry. Good-quality images are required for outstanding machine analysis results, but handling high-definition images can be problematic in a constrained environment such as a low-bandwidth network or low-capacity storage. Lowering the image resolution might be a straightforward way to reduce image data, but it would cause substantial information loss, degrading machine vision performance. Moreover, human supervision may be necessary for contingencies that machine vision cannot handle. Therefore, an innovative image compression method that considers both machine and human vision is required: higher compression efficiency than the state-of-the-art codec, strong machine vision performance, and human-recognizable quality. In this paper, we propose Versatile Video Coding (VVC)-based image compression for hybrid vision, i.e., machine vision and human vision. Our work provides coding tree unit (CTU)-level image compression with dual quantization parameters (QPs) according to a quantization parameter map and the saliency extracted by an object detection network; the proposed method maintains high quality with a low QP in the salient region but degrades quality with a high QP in the non-salient region. Compared with VVC, the proposed compression method achieves a bitrate reduction of up to 25.5% on machine vision tasks, demonstrating greater compression efficiency with still-admirable machine vision performance. From the perspective of human vision, the proposed method provides human-perceptible image quality, preserving acceptable objective quality values.
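A minimal sketch of the dual-QP idea follows, assuming saliency arrives as detector bounding boxes and that the VVC encoder (e.g., the VTM reference software) can consume an external per-CTU QP map; the CTU size of 128 and the two QP values are placeholder assumptions, and actual encoder integration is out of scope here.

```python
import numpy as np

def ctu_qp_map(image_h, image_w, boxes, ctu=128, qp_low=27, qp_high=42):
    """Build a CTU-level QP map: low QP where detected objects (the saliency
    proxy) overlap a CTU, high QP elsewhere. CTU size and QP values are
    illustrative assumptions, not the paper's settings."""
    rows = (image_h + ctu - 1) // ctu
    cols = (image_w + ctu - 1) // ctu
    qp_map = np.full((rows, cols), qp_high, dtype=np.int32)
    for x0, y0, x1, y1 in boxes:  # detector boxes in pixel coordinates
        r0, r1 = int(y0) // ctu, int(y1) // ctu
        c0, c1 = int(x0) // ctu, int(x1) // ctu
        qp_map[r0:r1 + 1, c0:c1 + 1] = qp_low
    return qp_map

# Example: one detection on a 1920x1080 image yields a 9 x 15 CTU map
# with QP 27 over the object and QP 42 in the background.
print(ctu_qp_map(1080, 1920, [(640, 360, 1280, 720)]))
```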