A critical step in effective care and treatment planning for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the cause of the coronavirus disease 2019 (COVID-19) pandemic, is assessing the severity of disease progression. Chest x-rays (CXRs) are often used to assess SARS-CoV-2 severity, with two important assessment metrics being the extent of lung involvement and the degree of opacity. In this proof-of-concept study, we assess the feasibility of computer-aided scoring of SARS-CoV-2 lung disease severity on CXRs using a deep learning system. The data consisted of 396 CXRs from SARS-CoV-2-positive patient cases. Geographic extent and opacity extent were scored by two board-certified expert chest radiologists (each with more than 20 years of experience) and a second-year radiology resident. The deep neural networks used in this study, which we name COVID-Net S, are based on the COVID-Net network architecture. 100 versions of the network were independently trained (50 to perform geographic extent scoring and 50 to perform opacity extent scoring) using random subsets of CXRs from the study, and we evaluated the networks using stratified Monte Carlo cross-validation experiments. The COVID-Net S deep neural networks yielded R² values of 0.664 ± 0.032 and 0.635 ± 0.044 between predicted scores and radiologist scores for geographic extent and opacity extent, respectively, in stratified Monte Carlo cross-validation experiments. The best-performing COVID-Net S networks achieved R² values of 0.739 and 0.741 between predicted scores and radiologist scores for geographic extent and opacity extent, respectively. The results are promising and suggest that the use of deep neural networks on CXRs could be an effective tool for computer-aided assessment of SARS-CoV-2 lung disease severity, although additional studies are needed before adoption for routine clinical use.
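The abstract does not include its evaluation code; the following is a minimal sketch of how the reported mean ± standard deviation of R² could be computed under stratified Monte Carlo cross-validation, assuming radiologist scores arrive as a NumPy array and a per-split prediction function stands in for a trained COVID-Net S network. All names here (`monte_carlo_cv`, `predict_fn`, the split fraction) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def r2_score(y_true, y_pred):
    # Coefficient of determination between reference and predicted scores.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def monte_carlo_cv(scores, predict_fn, n_splits=50, test_frac=0.3, seed=0):
    # Repeatedly draw a random held-out subset, score the model's
    # predictions against radiologist scores, and aggregate R².
    rng = np.random.default_rng(seed)
    n = len(scores)
    r2s = []
    for _ in range(n_splits):
        test_idx = rng.choice(n, size=int(n * test_frac), replace=False)
        y_true = scores[test_idx]
        y_pred = predict_fn(test_idx)  # model trained on the remaining cases
        r2s.append(r2_score(y_true, y_pred))
    r2s = np.array(r2s)
    return r2s.mean(), r2s.std()
```

In the study, `predict_fn` would correspond to a network trained on the complementary subset of the 396 CXRs for each of the 50 splits per scoring task.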
Abstract. Automated building footprint extraction from High Spatial Resolution (HSR) remote sensing images plays an important role in urban planning and management, and in hazard and disease control. However, HSR images are not always available in practice. In these cases, super-resolution methods, especially deep learning (DL)-based ones, can produce higher spatial resolution images from lower resolution inputs. DL-based super-resolution methods are widely used in a variety of remote sensing applications, yet few studies have focused on their impact on building footprint extraction. As such, we present an exploration of this topic. Specifically, we first super-resolve the Massachusetts Building Dataset using bicubic interpolation, a pre-trained Super-Resolution CNN (SRCNN), a pre-trained Residual Channel Attention Network (RCAN), and a pre-trained Residual Feature Aggregation Network (RFANet). Then, using the dataset at its original resolution as well as the four super-resolved versions, we employ the High-Resolution Network (HRNet) v2 to extract building footprints. Our experiments show that super-resolving either the training or test dataset with a recent high-performance DL-based super-resolution method can improve the accuracy of building footprint extraction. Although SRCNN-based building footprint extraction gives the highest Overall Accuracy, Intersection over Union, and F1 score, we nevertheless suggest using the latest super-resolution methods to process images before building footprint extraction, because the pre-trained SRCNN has a fixed scale ratio and converges slowly in training.
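The three accuracy metrics named in the abstract (Overall Accuracy, Intersection over Union, and F1 score) are standard for binary segmentation and can be computed from the confusion counts of predicted versus ground-truth building masks. The sketch below is an illustrative implementation under the assumption that both masks are binary NumPy arrays with 1 marking building pixels; the function name is hypothetical and not from the paper.

```python
import numpy as np

def footprint_metrics(pred, gt):
    # pred, gt: binary arrays of equal shape (1 = building pixel).
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()     # building predicted as building
    fp = np.logical_and(pred, ~gt).sum()    # background predicted as building
    fn = np.logical_and(~pred, gt).sum()    # building missed
    tn = np.logical_and(~pred, ~gt).sum()   # background correctly rejected
    oa = (tp + tn) / pred.size
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"OA": oa, "IoU": iou, "F1": f1}
```

Note that OA can look deceptively high when buildings cover a small fraction of the scene, which is why IoU and F1, both computed only over the building class, are reported alongside it.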
Abstract. Nighttime light (NTL) remotely sensed imagery has been applied to monitoring human activities from many perspectives. The two most widely used NTL satellites, the Defense Meteorological Satellite Program (DMSP) Operational Linescan System and the Suomi National Polar-orbiting Partnership (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS), have different spatial and radiometric resolutions. Thus, some long time-series analyses cannot be conducted without effective and accurate cross-calibration of these two datasets. In this study, we propose a deep-learning-based model to simulate VIIRS-like DMSP NTL data by integrating the enhanced vegetation index (EVI) data product from MODIS. Judged by the spatial pattern of the results, the modified Self-Supervised Sparse-to-Dense network delivered satisfactory spatial-resolution downscaling. Quantitative analysis of the simulated VIIRS-like DMSP NTL against the original VIIRS NTL showed good consistency at the pixel level for four selected sub-datasets, with R² ranging from 0.64 to 0.76 and RMSE ranging from 3.96 to 9.55. Our method demonstrates that a deep learning model can learn cross-sensor calibration and NTL data simulation from relatively raw data rather than from data finely processed with expert knowledge.
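The pixel-level consistency reported in the abstract (R² of 0.64 to 0.76 and RMSE of 3.96 to 9.55) can be reproduced in form, though not in value, by comparing a simulated raster against the reference raster pixel by pixel. The following is a minimal sketch of such an agreement check, assuming both rasters are NumPy arrays on the same grid; the function name is an illustrative assumption, not the authors' code.

```python
import numpy as np

def pixelwise_agreement(simulated, reference):
    # Flatten both rasters and compare simulated VIIRS-like values
    # against the reference VIIRS values pixel by pixel.
    s = simulated.ravel().astype(float)
    r = reference.ravel().astype(float)
    rmse = np.sqrt(np.mean((s - r) ** 2))
    ss_res = np.sum((r - s) ** 2)
    ss_tot = np.sum((r - r.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return r2, rmse
```

In practice the two rasters would first need to be co-registered and masked to valid pixels before such a comparison is meaningful.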