This paper addresses the general problem of protecting built heritage against deterioration and loss. To continuously monitor and update the structural health status, a crowd-sensing solution based on a powerful, automatic deep learning technique is proposed. The aim of this solution is to overcome the limitations of manual, visual damage detection methods, which are costly and time-consuming. Instead, automatic visual inspection of walls for damage detection is performed efficiently and effectively using an embedded Convolutional Neural Network (CNN). This CNN detects the most frequent types of surface damage in wall photos. The study has been conducted in the Kasbah of Algiers, where the following four types of damage have been considered: efflorescence, spall, crack, and mold. The CNN is designed and trained to be integrated into a mobile application for a participatory crowd-sensing solution. The application should be widely and freely deployed, so that any user can take a picture of a suspected damaged wall and get an instant, automatic diagnosis through the embedded CNN. In this context, we have chosen MobileNetV2 with a transfer learning approach. A set of real images has been collected, manually annotated, and used for training, validation, and testing. Extensive experiments have been conducted to assess the efficiency and effectiveness of the proposed solution, using a 5-fold cross-validation procedure. The obtained results show in particular a mean weighted average precision of 0.868 ± 0.00862 (at a 99% confidence level) and a mean weighted average recall of 0.84 ± 0.00729 (at a 99% confidence level). To evaluate the performance of MobileNetV2 as a feature extractor, we conducted a comparative study with other small backbones. Further analysis of the CNN activations using Grad-CAM has also been carried out.
The obtained results show that our method remains effective even with a small network and medium- to low-resolution images. The MobileNetV2-based CNN is smaller and computationally cheaper than the other CNNs, with similar performance. Finally, the detected surface damages have also been plotted on a geographic map, giving a global view of their distribution.
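For reference, the weighted average precision and recall figures reported above can be computed from a multi-class confusion matrix, weighting each class metric by its support. The following is a minimal pure-Python sketch; the function name and matrix layout are illustrative assumptions, not the authors' code:

```python
def weighted_precision_recall(confusion):
    """Weighted-average precision and recall from a square confusion matrix,
    where confusion[i][j] counts samples of true class i predicted as class j.
    Class weights are the supports (row sums), as in 'weighted' averaging."""
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    precision = recall = 0.0
    for i in range(n):
        support = sum(confusion[i])                         # true samples of class i
        predicted = sum(confusion[r][i] for r in range(n))  # samples predicted as i
        tp = confusion[i][i]                                # correct predictions
        weight = support / total
        precision += weight * (tp / predicted if predicted else 0.0)
        recall += weight * (tp / support if support else 0.0)
    return precision, recall
```

A 4x4 matrix over the classes efflorescence, spall, crack, and mold would be used in the setting described above; the same formula applies to any number of classes.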
Coronavirus disease is a pandemic that has infected millions of people around the world. Lung CT scans are effective diagnostic tools, but radiologists can quickly become overwhelmed by the flow of infected patients. Therefore, automated image interpretation needs to be achieved. Deep learning (DL) can support critical medical tasks, including diagnostics, and DL algorithms have successfully been applied to the classification and detection of many diseases. This work aims to use deep learning methods to classify patients as Covid-19-positive or healthy. We collected four available datasets and tested our convolutional neural networks (CNNs) on different distributions to investigate the generalizability of our models. To clearly explain the predictions, the Grad-CAM and Fast-CAM visualization methods were used. Our approach reaches more than 92% accuracy on two different distributions. In addition, we propose a computer-aided diagnosis web application for Covid-19 diagnosis. The results suggest that our proposed deep learning tool can be integrated into the Covid-19 detection process and be useful for rapid patient management.
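The Grad-CAM maps mentioned above are obtained by weighting a convolutional layer's feature maps with the gradients of the class score, pooled per channel, followed by a ReLU. A minimal NumPy sketch of this combination step under assumed array shapes (this is not the paper's implementation, which would extract activations and gradients from a trained CNN):

```python
import numpy as np

def grad_cam_heatmap(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations and the gradients
    of the class score with respect to those activations.

    activations, gradients: arrays of shape (H, W, K) for K feature maps.
    Returns an (H, W) map normalized to [0, 1]."""
    # Channel weights: global-average-pool the gradients over the spatial dims.
    weights = gradients.mean(axis=(0, 1))                    # shape (K,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence only.
    cam = np.maximum((activations * weights).sum(axis=-1), 0.0)
    # Normalize for display as a heatmap overlay.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In practice the resulting map is upsampled to the input image size and overlaid on the CT slice to show which regions drove the prediction.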
Left bundle branch block (LBBB) is a frequent source of false-positive MPI reports in patients evaluated for coronary artery disease. Purpose: In this work, we evaluated the ability of a CNN-based solution, using transfer learning, to produce an expert-like judgment in recognizing LBBB false defects. Methods: We retrospectively collected MPI polar maps of patients with small to large fixed anteroseptal perfusion defects. Images were divided into two groups. The LBBB group included patients in whom this defect was judged a false defect by two experts. The LAD group included patients in whom this defect was judged a true defect by two experts. We used a transfer learning approach on a CNN (ResNet50V2) to classify the images into the two groups. Results: After 60 iterations, the accuracy plateaued at 0.98 and the loss was 0.19 (the validation accuracy and loss were 0.91 and 0.25, respectively). A first test set of 23 images was used (11 LBBB and 12 LAD). The empirical ROC (receiver operating characteristic) area was estimated at 0.98. A second test set (18x2 images) was collected after the final results; the ROC area was again estimated at 0.98. Conclusion: Artificial intelligence, using a CNN and transfer learning, could reproduce an expert-like judgment in differentiating between LBBB false defects and LAD true defects.
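The empirical ROC area reported in the Results is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative case, counting ties as one half. A small illustrative sketch of that computation (not the authors' evaluation code):

```python
def roc_auc(scores, labels):
    """Empirical ROC area via pairwise comparisons (Mann-Whitney U):
    fraction of (positive, negative) pairs ranked correctly, ties = 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0      # positive correctly ranked above negative
            elif p == n:
                wins += 0.5      # tie counts half
    return wins / (len(pos) * len(neg))
```

On a test set such as the 23-image one above (11 LBBB, 12 LAD), the scores would be the CNN's class probabilities and the labels the expert consensus.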