Our extensive experimental results on two well-established public benchmark datasets demonstrate that our method is more effective than other state-of-the-art methods for skin lesion segmentation.
Social media (and the world at large) have been awash with news of the COVID-19 pandemic. With the passage of time, news and awareness about COVID-19 spread like the pandemic itself, with an explosion of messages, updates, videos, and posts. Mass hysteria manifested as another concern in addition to the health risk that COVID-19 presented. Predictably, public panic soon followed, mostly due to misconceptions, a lack of information, or sometimes outright misinformation about COVID-19 and its impacts. It is thus timely and important to conduct an ex post facto assessment of the early information flows during the pandemic on social media, as well as a case study of evolving public opinion on social media, which is of general interest. This study aims to inform policy that can be applied to social media platforms; for example, determining what degree of moderation is necessary to curtail misinformation on social media. This study also analyzes views concerning COVID-19 by focusing on people who interact and share messages on Twitter. As a platform for our experiments, we present a new large-scale sentiment data set, COVIDSENTI, which consists of 90,000 COVID-19-related tweets collected in the early stages of the pandemic, from February to March 2020. The tweets have been labeled into positive, negative, and neutral sentiment classes. We analyzed the collected tweets for sentiment classification using different sets of features and classifiers. Negative opinion played an important role in conditioning public sentiment: for instance, we observed that people favored lockdown earlier in the pandemic; however, as expected, sentiment shifted by mid-March. Our study supports the view that there is a need to develop a proactive and agile public health presence to combat the spread of negative sentiment on social media following a pandemic.
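The "features and classifiers" pipeline the abstract describes can be illustrated with a minimal three-class sentiment classifier. This is a toy sketch only: the tweets below are invented stand-ins for COVIDSENTI entries, and a hand-rolled multinomial Naive Bayes over unigram counts stands in for the (unspecified) feature sets and classifiers the study actually compares.

```python
from collections import Counter, defaultdict
import math

# Invented toy tweets standing in for COVIDSENTI rows (not actual data);
# the three labels match the paper's positive/negative/neutral classes.
train = [
    ("lockdown is keeping people safe and healthy", "positive"),
    ("grateful for health workers fighting the virus", "positive"),
    ("panic and fear everywhere this outbreak is terrible", "negative"),
    ("misinformation is spreading faster than the virus", "negative"),
    ("new case numbers were reported today", "neutral"),
    ("the who held a press briefing this afternoon", "neutral"),
]

# Multinomial Naive Bayes with Laplace smoothing over unigram counts --
# one simple instance of a "features + classifier" setup.
word_counts = defaultdict(Counter)
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    words = text.split()
    def log_prob(label):
        total = sum(word_counts[label].values())
        lp = math.log(class_counts[label] / len(train))
        for w in words:
            lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        return lp
    return max(class_counts, key=log_prob)

print(predict("fear and panic over the outbreak"))  # prints "negative"
```

A real replication would swap in the paper's 90,000 labeled tweets and stronger feature/classifier combinations, but the train-count-predict structure stays the same.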
The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images requires combining the sensitivity of PET to detect abnormal regions with anatomical localization from CT. However, current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on knowledge about the image analysis task. These methods generally do not consider the spatially varying visual characteristics that encode different information across the different modalities, which have different priorities at different locations. For example, high abnormal PET uptake in the lungs is more meaningful for tumor detection than physiological PET uptake in the heart. Our aim is to improve fusion of the complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns how to fuse this complementary information for medical image analysis. Our CNN first encodes modality-specific features and then uses them to derive a spatially varying fusion map that quantifies the relative importance of each modality's features across different spatial locations. These fusion maps are then multiplied with the modality-specific feature maps to obtain a representation of the complementary multi-modality information at different locations, which can then be used for image analysis, e.g., region detection. We evaluated our CNN on a region detection problem using a dataset of PET-CT images of lung cancer. We compared our method to baseline techniques for multi-modality image analysis (pre-fused inputs, multi-branch techniques, multi-channel techniques) and demonstrated that our approach had a significantly higher accuracy (p < 0.05) than the baselines.
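The fusion mechanism described above (modality-specific features, a spatially varying fusion map, and element-wise weighting of each modality's feature maps) can be sketched in NumPy. Everything here is an illustrative assumption rather than the paper's architecture: the shapes, the random "encoder" outputs, and the scalar scoring function standing in for the learned layers that produce the fusion map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical modality-specific feature maps (H x W x C), standing in for
# the outputs of two CNN encoder branches (PET and CT).
H, W, C = 8, 8, 4
feat_pet = rng.standard_normal((H, W, C))
feat_ct = rng.standard_normal((H, W, C))

def fusion_maps(feats):
    """Per-modality, per-location fusion weights via a softmax over a
    scalar score per modality (a stand-in for learned fusion layers)."""
    scores = np.stack([f.mean(axis=-1) for f in feats])   # (M, H, W)
    e = np.exp(scores - scores.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)               # weights sum to 1

# Multiply each modality's features by its spatially varying weight map,
# then combine into one fused representation for downstream analysis.
w = fusion_maps([feat_pet, feat_ct])                      # (2, H, W)
fused = w[0][..., None] * feat_pet + w[1][..., None] * feat_ct

print(fused.shape)  # (8, 8, 4)
```

The key point the sketch captures is that the weighting varies per spatial location, so (for example) PET features could dominate in lung regions while CT features dominate near the heart.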
The segmentation of skin lesions in dermoscopic images is a fundamental step in automated computer-aided diagnosis of melanoma. Conventional segmentation methods, however, have difficulties when the lesion borders are indistinct and when contrast between the lesion and the surrounding skin is low. They also perform poorly when there is a heterogeneous background or a lesion that touches the image boundaries; this then results in under- and oversegmentation of the skin lesion. We suggest that saliency detection using the reconstruction errors derived from a sparse representation model, coupled with a novel background detection scheme, can more accurately discriminate the lesion from surrounding regions. We further propose a Bayesian framework that better delineates the shape and boundaries of the lesion. We evaluated our approach on two public datasets comprising 1100 dermoscopic images and compared it to other conventional and state-of-the-art unsupervised (i.e., no training required) lesion segmentation methods, as well as state-of-the-art unsupervised saliency detection methods. Our results show that our approach is more accurate and robust in segmenting lesions compared to other methods. We also discuss the general extension of our framework as a saliency optimization algorithm for lesion segmentation.
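The core idea above (regions that are poorly reconstructed from a background model are salient, i.e., likely lesion) can be sketched on synthetic data. This is a crude stand-in, not the paper's method: a nearest-background-template squared error replaces the sparse-representation reconstruction error, and the "image" is a synthetic bright square on a darker background.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "dermoscopic image": a bright lesion on a darker background.
img = np.full((32, 32), 0.2) + 0.02 * rng.standard_normal((32, 32))
img[12:20, 12:20] += 0.6  # lesion region

# Background dictionary: patches sampled along the image borders, on the
# common saliency-prior assumption that borders are mostly lesion-free.
border = np.concatenate([img[0], img[-1], img[:, 0], img[:, -1]])
D = np.stack([border[i:i + 8] for i in range(0, len(border) - 8, 8)]).T

def recon_error(patch, D):
    """Squared error to the closest background template -- a crude
    stand-in for a sparse-coding reconstruction residual."""
    return float(np.min(np.sum((D - patch[:, None]) ** 2, axis=0)))

# Saliency per row-patch over the lesion columns: lesion rows should
# reconstruct poorly from background templates and score high.
saliency = np.array([recon_error(img[r, 12:20], D) for r in range(32)])
print(saliency[:12].mean() < saliency[12:20].mean())  # True
```

In the actual pipeline, such a saliency map would then be refined (here, by the proposed Bayesian framework) to recover the lesion's shape and boundaries.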