Background. The image-based identification of distinct tissues within dermatological wounds enhances patient care, since it requires no invasive evaluations. This manuscript presents an approach, named QTDU, that combines deep learning models with superpixel-driven segmentation methods for assessing the quality of tissues in dermatological ulcers. Method. QTDU consists of a three-stage pipeline comprising ulcer segmentation, tissue labeling, and wounded-area quantification. We set up our approach by training several deep learning models on a real, annotated set of dermatological ulcers for the identification of ulcered superpixels. Results. Empirical evaluations on 179,572 superpixels divided into four classes showed that QTDU accurately spots wounded tissues (AUC = 0.986, sensitivity = 0.97, and specificity = 0.974) and, through fine-tuning of a ResNet-based model, outperforms machine-learning approaches by up to 8.2% in F1-score. Last but not least, experimental evaluations also showed that QTDU correctly quantified wounded tissue areas within a 0.089 Mean Absolute Error ratio. Conclusions. Results indicate QTDU is effective for both tissue segmentation and wounded-area quantification. Compared with existing machine-learning approaches, the combination of superpixels and deep learning models outperformed the competitors at strong significance levels.

… can be automatically evaluated by Computer-Aided Diagnosis (CAD) tools, or even used to search massive databases through content-only queries, as in Content-Based Image Retrieval (CBIR) applications.
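The three-stage pipeline described in the abstract can be sketched in a few lines. The snippet below is a hedged, illustrative mock-up, not the paper's implementation: the superpixel map is assumed precomputed, and `classify_superpixel` is a color-rule stand-in for the fine-tuned ResNet; the class names and thresholds are assumptions for illustration only.

```python
import numpy as np

# Hypothetical QTDU-style pipeline on precomputed inputs:
# (1) a superpixel label map over the ulcer image,
# (2) a per-superpixel tissue classifier (stubbed here),
# (3) wounded-area quantification from the per-class pixel counts.

TISSUE_CLASSES = ("non_wound", "granulation", "fibrin", "necrosis")

def classify_superpixel(mean_rgb):
    """Stand-in for the trained network: labels a superpixel by its mean
    color. A real QTDU model would run a fine-tuned ResNet on the patch."""
    r, g, b = mean_rgb
    if r > 150 and g < 100:            # reddish -> granulation (assumption)
        return "granulation"
    if r > 150 and g > 130:            # yellowish -> fibrin (assumption)
        return "fibrin"
    if r < 80 and g < 80 and b < 80:   # dark -> necrosis (assumption)
        return "necrosis"
    return "non_wound"

def quantify_wound(image, superpixels):
    """image: HxWx3 uint8 array; superpixels: HxW integer label map.
    Returns per-class pixel counts and the wounded-area fraction."""
    counts = {c: 0 for c in TISSUE_CLASSES}
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        label = classify_superpixel(image[mask].mean(axis=0))
        counts[label] += int(mask.sum())
    total = image.shape[0] * image.shape[1]
    wounded = total - counts["non_wound"]
    return counts, wounded / total
```

Classifying per superpixel rather than per pixel is what keeps the quantification step cheap: the network runs once per region, and the area estimate is just a sum of region sizes.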
In both CAD and CBIR cases, detecting abnormalities requires extracting patterns from images, while a decision-making strategy is needed to compare new images against those in the database [4,5]. Since dermatological lesions are routinely diagnosed by biopsies and surrounding skin aspects, ulcers can be computationally characterized by the particular types of tissue (and their areas) within the wounded region [6,7]. For instance, Mukherjee et al. [8] proposed a five-color classification model and applied a color-based low-level extractor whose features were labeled by a Support-Vector Machine (SVM) strategy at an 87.61% hit ratio. This idea of concatenating feature extraction and classification lies at the core of most wound segmentation strategies, as in the study of Kavitha et al. [9], which evaluated leg ulcerations by extracting patterns based on local spectral histograms labeled by a Multi-Layer Perceptron (MLP) classifier with 87.05% accuracy. Analogously, Pereyra et al. [10] discussed the use of color descriptors and an Instance-based Learning (IbL) classifier with a 61.7% hit ratio, whereas Veredas et al. [11] suggested texture descriptors and an MLP classifier with 84.84% accuracy. Blanco et al. [4] and Chino et al. [12] followed a slightly different premise for finding proper similarity measures and comparison criteria for dermatological wounds. Their approaches are based on a divide-and-conquer stra…
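The "extract features, then classify" premise shared by the surveyed methods can be illustrated with a minimal sketch: a low-level color-histogram extractor paired with a nearest-centroid decision rule, a simple form of instance-based learning. All names, bin counts, and labels below are illustrative assumptions, not taken from any cited paper.

```python
import numpy as np

def color_histogram(image, bins=4):
    """Low-level color descriptor: concatenated per-channel histograms,
    L1-normalized so tiles of different sizes are comparable."""
    feats = []
    for ch in range(3):
        h, _ = np.histogram(image[..., ch], bins=bins, range=(0, 256))
        feats.append(h)
    v = np.concatenate(feats).astype(float)
    return v / v.sum()

class NearestCentroid:
    """Toy instance-based learner: one centroid per class, predict by
    smallest Euclidean distance in feature space."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.centroids_ = {
            c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
            for c in self.classes_
        }
        return self

    def predict(self, x):
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(x - self.centroids_[c]))
```

Swapping the descriptor (color vs. spectral vs. texture) or the classifier (SVM, MLP, IbL) yields the different systems surveyed above, which is why their reported accuracies are directly comparable.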
Abstract: Social media could provide valuable information to support decision making in crisis management, such as in accidents, explosions, and fires. However, much of the data from social media are images, which are uploaded at a rate that makes it impossible for human beings to analyze them. Despite the many works on image analysis, there are no fire detection studies on social media. To fill this gap, we propose the use and evaluation of a broad set of content-based image retrieval and classification techniques for fire detection. Our main contributions are: (i) the development of the Fast-Fire Detection method (FFireDt), which combines feature extractors and evaluation functions to support instance-based learning; (ii) the construction of an annotated set of images with ground truth depicting fire occurrences, the Flickr-Fire dataset; and (iii) the evaluation of 36 efficient image descriptors for fire detection. Using real data from Flickr, our results showed that FFireDt was able to achieve a precision for fire detection comparable to that of human annotators. Therefore, our work shall provide a solid basis for further developments on monitoring images from social media.
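The core of the FFireDt idea, pairing an image descriptor with an evaluation (distance) function to drive instance-based learning, can be sketched with a k-NN rule. The descriptor and distance below are hypothetical placeholders standing in for the 36 combinations evaluated in the paper.

```python
import numpy as np
from collections import Counter

def mean_rgb(image):
    """Toy descriptor (assumption): mean color of an HxWx3 image."""
    return image.reshape(-1, 3).mean(axis=0)

def l2(a, b):
    """One possible evaluation function: Euclidean distance."""
    return float(np.linalg.norm(a - b))

def knn_predict(query, gallery, labels, distance, k=3):
    """Instance-based learning: vote among the k nearest gallery items
    under the supplied distance function."""
    order = sorted(range(len(gallery)),
                   key=lambda i: distance(query, gallery[i]))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]
```

Because both the descriptor and the distance are parameters, evaluating many extractor/evaluation-function pairs, as FFireDt does, amounts to sweeping these two arguments over a labeled gallery.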