We use the landslide inventory database provided by the United States Geological Survey (USGS). USGS maintains a database of landslide reports with approximate locations and times, but no images; it is the most extensive dataset of its kind. We use this inventory to extract satellite images from Google Earth.<br>
<p>Landslides are common natural disasters around the globe. An accurate understanding of their spatial distribution is essential for landslide analysis, prediction, and hazard mitigation. Many techniques have been used to establish landslide inventories, but they either have a low level of automation (e.g., visual interpretation) or low generalization ability (e.g., pixel-based or object-based approaches), so improvements in landslide mapping are still needed. We have therefore developed an interactive, user-friendly web portal for landslide labeling. The portal takes multi-temporal satellite images as input; a deep learning model first detects landslide-suspicious areas and presents the results to users for validation. Users can then review and annotate these machine-labeled landslides through the interface, and their edits further improve the accuracy of the deep learning model. Two landslide-affected regions in Washington were selected to test the portal's capability for landslide mapping, and the detected landslides were validated by expert labelers. The results indicate that our annotation tool produces landslide maps with high precision and a high annotation rate while reducing human effort.</p>
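The review step described above can be sketched as a simple merge of machine predictions with user corrections. This is an illustrative sketch only: the portal's actual data model is not described in the text, so the per-pixel binary mask and the dictionary of user edits are assumptions.

```python
import numpy as np

def merge_annotations(machine_mask, user_edits):
    """Apply user review to a machine-predicted landslide mask.

    machine_mask: 2-D array of 0/1 pixel labels from the deep learning model.
    user_edits:   dict mapping (row, col) to 1 (confirm/add landslide)
                  or 0 (reject a false detection).

    Returns a corrected mask that can serve as a retraining label,
    leaving the original machine prediction untouched.
    """
    corrected = machine_mask.copy()
    for (row, col), label in user_edits.items():
        corrected[row, col] = label
    return corrected
```

The corrected masks accumulated from many review sessions form the feedback signal that the abstract says "will further improve the accuracy of the deep learning model."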
In this article, we consider the scenario where remotely sensed images are collected sequentially in temporal batches: each batch focuses on images from a particular ecoregion, but different batches can cover ecoregions with distinct landscape characteristics. For this scenario, we study the following questions: (1) How well do DL models trained on homogeneous regions perform when transferred to different ecoregions? (2) Does increasing the spatial coverage of the data improve model performance in a given ecoregion, even when the extra data do not come from that ecoregion? (3) Can a landslide pixel-labeling model be incrementally updated with new data, without access to the old data and without losing performance on it, so that researchers can share models trained on proprietary datasets? We address these questions with a framework called Task-Specific Model Updates (TSMU), whose goal is to continually update a landslide semantic segmentation model with data from new ecoregions without revisiting data from old ecoregions and without losing performance on them. Extensive experiments on four ecoregions in the United States address these questions and establish that data from other ecoregions can improve the model's performance on the original ecoregion. In other words, if one has an ecoregion of interest, one can collect data both inside and outside that region to improve model performance there; and if one has many ecoregions of interest, data from all of them are needed.<br>
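One common way to update a segmentation model on a new ecoregion without revisiting old data is to add a distillation penalty that keeps the updated model close to a frozen copy of the old one. The sketch below illustrates that idea for per-pixel class probabilities; it is a hypothetical stand-in, not the paper's actual TSMU objective, and the `alpha` weighting is an assumed hyperparameter.

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def incremental_update_loss(new_logits, labels, old_logits, alpha=0.5):
    """Per-pixel loss for updating a segmentation model on a new ecoregion.

    new_logits: (H, W, C) logits from the model being updated.
    labels:     (H, W) integer ground-truth classes for the new ecoregion.
    old_logits: (H, W, C) logits from the frozen pre-update model.

    Combines cross-entropy on the new region's labels with a KL
    distillation term toward the old model's predictions, so past
    performance is retained without access to past data.
    """
    p_new = softmax(new_logits)
    p_old = softmax(old_logits)
    h, w, c = p_new.shape
    # Cross-entropy against the new ecoregion's ground truth.
    picked = p_new.reshape(-1, c)[np.arange(h * w), labels.ravel()]
    ce = -np.log(picked + 1e-12).mean()
    # KL(old || new): penalize drifting away from the old model.
    kl = (p_old * (np.log(p_old + 1e-12) - np.log(p_new + 1e-12))).sum(axis=-1).mean()
    return ce + alpha * kl
```

When the updated model agrees with the frozen one, the distillation term vanishes and the loss reduces to plain cross-entropy on the new data; any drift away from the old model's predictions is penalized in proportion to `alpha`.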