An electrocardiogram (ECG) is a basic and quick test for evaluating cardiac disorders and is crucial for remote patient monitoring equipment. Accurate ECG signal classification is critical for real-time measurement, analysis, archiving, and transmission of clinical data. Numerous studies have focused on accurate heartbeat classification, and deep neural networks have been proposed to improve accuracy while keeping models simple. We investigated a new model for ECG heartbeat classification and found that it surpasses state-of-the-art models, achieving accuracy scores of 98.5% on the PhysioNet MIT-BIH dataset and 98.28% on the PTB database. Furthermore, our model achieves an F1-score of approximately 86.71%, outperforming other models such as MINA, CRNN, and ExpertRF on the PhysioNet Challenge 2017 dataset.
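As a rough illustration of how a deep heartbeat classifier of this kind can be set up, a minimal 1D-CNN sketch in PyTorch follows; the layer sizes, the five-class output, and the 187-sample beat length are illustrative assumptions, not the architecture reported above.

    # Hypothetical 1D-CNN heartbeat classifier; layer sizes and the
    # 187-sample beat length are illustrative assumptions, not the model above.
    import torch
    import torch.nn as nn

    class BeatCNN(nn.Module):
        def __init__(self, num_classes: int = 5):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.AdaptiveAvgPool1d(1),           # global pooling over time
            )
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, x):                      # x: (batch, 1, 187)
            return self.classifier(self.features(x).squeeze(-1))

    model = BeatCNN()
    beats = torch.randn(8, 1, 187)                 # a batch of single-lead beats
    logits = model(beats)                          # (8, 5) class scores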
Accurately segmented nuclei are important not only for cancer classification but also for predicting treatment effectiveness and other biomedical applications. However, the diversity of cell types, various external factors, and illumination conditions make nucleus segmentation a challenging task. In this work, we present a new deep learning-based method for cell nucleus segmentation. The proposed convolutional blur attention (CBA) network consists of downsampling and upsampling procedures. A blur attention module and a blur pooling operation are used in the downsampling procedure to retain feature salience and avoid noise generation. A pyramid blur pooling (PBP) module is proposed to capture multi-scale information in the upsampling procedure. The proposed method has been compared with several prior segmentation models, namely U-Net, ENet, SegNet, LinkNet, and Mask R-CNN, on the 2018 Data Science Bowl (DSB) challenge dataset and the multi-organ nucleus segmentation (MoNuSeg) challenge at MICCAI 2018. The Dice similarity coefficient and several evaluation metrics, such as F1 score, recall, precision, and the aggregated Jaccard index (AJI), were used to evaluate the segmentation efficiency of these models. Overall, the proposed method performs best, with AJI scores of 0.8429 on the DSB dataset and 0.7985 on MoNuSeg.
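As a rough sketch of the blur-pooling idea (depthwise low-pass filtering before strided subsampling), the following minimal PyTorch module may help; the 3x3 binomial kernel, reflect padding, and module name are assumptions for illustration, not the exact CBA operator.

    # Illustrative blur pooling: a fixed binomial low-pass filter applied
    # depthwise before stride-2 subsampling. The kernel choice is an
    # assumption, not necessarily the CBA network's exact operator.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BlurPool2d(nn.Module):
        def __init__(self, channels: int, stride: int = 2):
            super().__init__()
            k = torch.tensor([1., 2., 1.])
            kernel = torch.outer(k, k)
            kernel = kernel / kernel.sum()               # normalized 3x3 binomial filter
            self.register_buffer("kernel", kernel[None, None].repeat(channels, 1, 1, 1))
            self.stride = stride
            self.channels = channels

        def forward(self, x):
            x = F.pad(x, (1, 1, 1, 1), mode="reflect")   # keep borders before striding
            return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)

    pool = BlurPool2d(channels=64)
    feat = torch.randn(1, 64, 128, 128)
    out = pool(feat)                                     # (1, 64, 64, 64), anti-aliased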
This study presents a noninvasive visual sensing enhancement system for skin lesion segmentation. According to the Skin Cancer Foundation, skin cancer kills more than two people every hour in the United States, and one in every five Americans will develop the disease. Skin cancer is becoming more prevalent, so the need for skin cancer diagnosis is increasing, particularly for melanoma, which has a high metastasis rate. Many traditional algorithms, as well as computer-aided diagnosis tools, have been applied to dermoscopic images for skin lesion segmentation to meet this need. However, their accuracy is low and their prediction time is lengthy. This paper presents an antialiasing attention spatial convolution (AASC) model to segment melanoma skin lesions in dermoscopic images. Such a system can enhance existing Medical IoT (MIoT) applications and provide third-party clues for medical examiners. Empirical results show that the AASC performs well, overcoming dermoscopic limitations such as thick hair, low contrast, and shape and color distortion. The model was trained and evaluated strictly under several statistical metrics, including the Jaccard index, recall, precision, F1 score, and Dice coefficient. Remarkably, the AASC model yielded the highest scores compared with state-of-the-art models across all three datasets: ISIC 2016, ISIC 2017, and PH2.
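For reference, the listed overlap metrics can be computed from binary masks as in the generic sketch below; this is a standard formulation, not code from the study.

    # Generic overlap metrics for binary segmentation masks (not the study's code).
    import numpy as np

    def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
        pred, gt = pred.astype(bool), gt.astype(bool)
        tp = np.logical_and(pred, gt).sum()
        fp = np.logical_and(pred, ~gt).sum()
        fn = np.logical_and(~pred, gt).sum()
        precision = tp / (tp + fp + eps)
        recall    = tp / (tp + fn + eps)
        f1        = 2 * precision * recall / (precision + recall + eps)
        dice      = 2 * tp / (2 * tp + fp + fn + eps)   # equals F1 for binary masks
        jaccard   = tp / (tp + fp + fn + eps)           # intersection over union
        return {"precision": precision, "recall": recall,
                "f1": f1, "dice": dice, "jaccard": jaccard}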
The need for a lightweight and reliable segmentation algorithm is critical in various biomedical image-prediction applications. However, the limited quantity of data presents a significant challenge for image segmentation. Additionally, low image quality negatively impacts segmentation efficiency, and previous deep learning models for image segmentation require large numbers of parameters and hundreds of millions of computations, resulting in high costs and long processing times. In this study, we introduce a new lightweight segmentation model, the mobile anti-aliasing attention U-Net model (MAAU), which features both encoder and decoder paths. The encoder incorporates an anti-aliasing layer and convolutional blocks to reduce the spatial resolution of input images while avoiding the loss of shift equivariance. The decoder uses an attention block and decoder module to capture the prominent features in each channel. To address data-related problems, we applied data augmentation methods such as flipping, rotation, shearing, translation, and color distortion, which enhanced segmentation efficiency on the International Skin Imaging Collaboration (ISIC) 2018 and PH2 datasets. Our experimental results demonstrate that our approach uses fewer parameters, only 4.2 million, while outperforming various state-of-the-art segmentation methods.
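A minimal sketch of a channel-attention gate of the kind described for the decoder is shown below (squeeze-and-excitation style); the reduction ratio and layer layout are assumptions, not the exact MAAU block.

    # Squeeze-and-excitation-style channel attention; the reduction ratio and
    # layout are illustrative assumptions, not the exact MAAU decoder block.
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),                           # squeeze spatial dims
                nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
                nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
            )

        def forward(self, x):
            return x * self.gate(x)                                # reweight each channel

    att = ChannelAttention(64)
    out = att(torch.randn(2, 64, 32, 32))                          # same shape, channels reweighted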