Noise or artifacts in an image, such as shadow artifacts, deteriorate the performance of state-of-the-art image segmentation models. In this study, a novel saliency-based region detection and image segmentation (SRIS) model is proposed to overcome the problem of segmenting images in the presence of noise and intensity inhomogeneity. Herein, a novel adaptive level-set evolution protocol based on internal and external energy functions is designed to eliminate initialization sensitivity, thereby making the proposed SRIS model robust to contour initialization. In the level-set energy functional, an adaptive weight function is formulated to alter the strengths of the internal and external energy terms according to image information. In addition, the sign of the energy function is modulated depending on the internal and external regions to suppress the effects of noise in an image. Finally, the performance of the proposed SRIS model is illustrated on complex real and synthetic images and compared with that of previously reported state-of-the-art models. Moreover, statistical analyses on coronavirus disease (COVID-19) computed tomography images and the THUS10000 real-image dataset confirm the superior performance of the SRIS model in terms of both segmentation accuracy and time efficiency. The results suggest that SRIS is a promising approach for early screening of COVID-19.
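The abstract does not give the SRIS equations, so the following is only a minimal sketch of the general idea it describes: a region-based level-set update in which internal (inside-contour) and external (outside-contour) fitting terms are balanced by an adaptive weight derived from local image statistics. The function and parameter names (adaptive_weight, evolve_level_set, n_iter, dt) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_weight(img, sigma=3.0):
    # Assumed heuristic: a weight derived from local variance, so the balance
    # between internal and external terms adapts to local image information.
    local_mean = gaussian_filter(img, sigma)
    local_var = gaussian_filter((img - local_mean) ** 2, sigma)
    return local_var / (local_var.max() + 1e-8)

def evolve_level_set(img, phi, n_iter=200, dt=0.5):
    w = adaptive_weight(img)
    for _ in range(n_iter):
        inside = phi > 0
        c_in = img[inside].mean() if inside.any() else 0.0
        c_out = img[~inside].mean() if (~inside).any() else 0.0
        # Internal (inside) and external (outside) fitting forces; w shifts their balance.
        force = w * (img - c_in) ** 2 - (1.0 - w) * (img - c_out) ** 2
        phi -= dt * force                    # gradient-descent style update
        phi = gaussian_filter(phi, 1.0)      # simple smoothing as interface regularization
    return phi
```

In this toy version the weight simply favors the internal fitting term in high-variance regions; the actual SRIS weighting and sign modulation are defined in the paper itself.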
Active contour models have achieved prominent success in the area of image segmentation, allowing complex objects to be segmented from the background for further analysis. Existing models can be divided into region-based and edge-based active contour models. However, both rely on direct image data to achieve segmentation and face many challenges, including sensitivity to the initial contour position, noise sensitivity, local minima, and inefficiency caused by intensity inhomogeneity. The saliency map of an image changes the image representation, making it more visual and meaningful. In this study, we propose a novel model that exploits the advantages of a saliency map together with local image information (LIF) and overcomes the drawbacks of previous models. The proposed model is driven by the saliency map of an image and local image information to enhance the evolution of the active contour. In this model, the saliency map of an image is first computed to obtain the saliency-driven local fitting energy. Then, the saliency-driven local fitting energy is combined with the LIF model, resulting in a novel final energy functional. This energy functional is formulated through a level set formulation, and regularization terms are added to evolve the contour more precisely across object boundaries. The quality of the proposed method was verified on various synthetic images, real images, and publicly available datasets, including medical images. The image segmentation results and quantitative comparisons confirmed the contour-initialization independence, noise insensitivity, and superior segmentation accuracy of the proposed model compared with other segmentation models.
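As an illustration of the two ingredients named above, the sketch below computes a simple frequency-tuned-style saliency map (distance of each blurred pixel from the global mean) and uses it to modulate a local fitting force. This is an assumption about one plausible realization; the paper's saliency method and its exact combination with the LIF energy may differ, and the names saliency_map and saliency_driven_force are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(img):
    # Frequency-tuned style saliency: deviation of the smoothed image from the global mean.
    blurred = gaussian_filter(img, 2.0)
    sal = np.abs(blurred - img.mean())
    return sal / (sal.max() + 1e-8)

def saliency_driven_force(img, phi, sigma=3.0):
    sal = saliency_map(img)
    inside = (phi > 0).astype(float)
    # Local means computed with a Gaussian window, as in local-fitting models.
    m_in = gaussian_filter(img * inside, sigma) / (gaussian_filter(inside, sigma) + 1e-8)
    m_out = gaussian_filter(img * (1 - inside), sigma) / (gaussian_filter(1 - inside, sigma) + 1e-8)
    # The saliency map modulates the local fitting force so salient regions drive the contour.
    return sal * ((img - m_in) ** 2 - (img - m_out) ** 2)
```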
Fashion image analysis has attracted significant research attention owing to the availability of large-scale fashion datasets with rich annotations. However, existing deep learning models for fashion datasets often have high computational requirements. In this study, we propose a new model suitable for low-power devices. The proposed network is a one-stage detector that rapidly detects multiple clothing items and landmarks in fashion images. The network is designed as a modification of EfficientDet, originally proposed by Google Brain. It simultaneously trains the core input features at different resolutions and applies compound scaling to the backbone feature network. The bounding-box/class/landmark prediction networks maintain a balance between speed and accuracy, and the small number of parameters and low computational cost make the model efficient. Without image preprocessing, we achieved a mean average precision (mAP) of 0.686 for bounding-box detection and 0.450 for landmark estimation on the DeepFashion2 validation dataset, with an inference time of 42 ms. The optimal loss functions and optimizers were determined through extensive experiments. Furthermore, the proposed method has the advantage of operating on low-power devices.
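For context on the compound scaling mentioned above, the sketch below reproduces the scaling rules reported in the original EfficientDet paper (BiFPN width and depth, prediction-head depth, and input resolution as functions of the compound coefficient φ). The abstract does not give the modified detector's own coefficients, so this is background on the base design only; the largest EfficientDet configurations also deviate slightly from these formulas.

```python
def compound_scaling(phi: int):
    # Scaling rules from the original EfficientDet paper (D0 corresponds to phi = 0).
    bifpn_width = int(64 * (1.35 ** phi))   # channels of the BiFPN feature network
    bifpn_depth = 3 + phi                   # number of BiFPN layers
    head_depth = 3 + phi // 3               # depth of the box/class prediction networks
    input_resolution = 512 + 128 * phi      # square input size in pixels
    return bifpn_width, bifpn_depth, head_depth, input_resolution

# Example: the D0 configuration
print(compound_scaling(0))   # (64, 3, 3, 512)
```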
Image inhomogeneity often occurs in real-world images and may present considerable difficulties during image segmentation. Therefore, this paper presents a new approach for the segmentation of inhomogeneous images. The proposed hybrid active contour model is formulated by combining the statistical information of both local and global region-based energy fitting models. The local region-based energy fitting term assists in extracting regions with inhomogeneous intensity, whereas the global region-based term accelerates curve evolution over homogeneous regions. Both the local and global region-based energy functions drag contours toward the accurate object boundaries with precision. The local and global parts are each parameterized with weight coefficients, determined by image complexity, to balance their contributions. The proposed hybrid model is strongly capable of detecting regions of interest (ROIs) in the presence of complex object boundaries and noise because its local region-based part incorporates a bias field. Moreover, the proposed method includes a new bias field (NBF) initialization and eliminates the dependence on the initial contour position. Experimental results on synthetic and real-world images, together with a comparative analysis against previous state-of-the-art methods, confirm its superior performance in terms of both time efficiency and segmentation accuracy.

INDEX TERMS: Active contours, bias field, image segmentation, intensity inhomogeneity, level set.
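The following is a minimal sketch of the hybrid idea described above, assuming a Chan-Vese-style global term, a local-fitting term on a bias-corrected image, and a single weight omega standing in for the complexity-based coefficients. The names (hybrid_force, omega) and the way the bias field enters are illustrative assumptions, not the paper's exact formulation or its NBF initialization.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_force(img, phi, bias, omega=0.5, sigma=4.0):
    inside = (phi > 0).astype(float)
    # Global (Chan-Vese style) fitting constants over the whole image.
    c_in = (img * inside).sum() / (inside.sum() + 1e-8)
    c_out = (img * (1 - inside)).sum() / ((1 - inside).sum() + 1e-8)
    global_force = (img - c_in) ** 2 - (img - c_out) ** 2
    # Local fitting means computed on the bias-corrected image, as in bias-field models.
    corrected = img / (bias + 1e-8)
    m_in = gaussian_filter(corrected * inside, sigma) / (gaussian_filter(inside, sigma) + 1e-8)
    m_out = gaussian_filter(corrected * (1 - inside), sigma) / (gaussian_filter(1 - inside, sigma) + 1e-8)
    local_force = (corrected - m_in) ** 2 - (corrected - m_out) ** 2
    # omega balances the local and global parts, standing in for the
    # image-complexity-based weight coefficients described in the abstract.
    return omega * local_force + (1 - omega) * global_force
```

In practice the local term dominates (larger omega) for strongly inhomogeneous images, while the global term speeds up evolution over homogeneous regions, which mirrors the roles the abstract assigns to the two parts.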