Preparation and mitigation efforts for widespread landslide hazards can be aided by a large-scale, well-labeled landslide inventory with high location accuracy. Recent small-scale studies on pixel-wise labeling of potential landslide areas in remotely sensed images using deep learning (DL) showed potential but were based on data from very small, homogeneous regions with unproven model transferability. In this paper, we consider a more realistic and practical setting for large-scale, heterogeneous landslide data collection and DL-based labeling. In this setting, remotely sensed images are collected sequentially in temporal batches, where each batch focuses on images from a particular ecoregion, but different batches can focus on different ecoregions with distinct landscape characteristics. For such a scenario, we study the following questions: (1) How well do DL models trained in homogeneous regions perform when transferred to different ecoregions? (2) Does increasing the spatial coverage of the data improve model performance in a given ecoregion, even when the extra data do not come from that ecoregion? (3) Can a landslide pixel-labeling model be incrementally updated with new data, without access to the old data and without losing performance on the old data (so that researchers can share models obtained from proprietary datasets)? We address these questions by extending the Learning without Forgetting framework, which is used for incremental training of image classification models, to the setting of incremental training of semantic segmentation models (e.g., identifying all landslide pixels in an image). We call the resulting extension Task-Specific Model Updates (TSMU). The TSMU semantic segmentation framework consists of an encoder shared by all ecoregions to capture the similarities between them, and ecoregion-specific decoders to capture the nuances of each ecoregion. The framework is continually updated using a three-stage training procedure each time a new ecoregion is added, without having to revisit data from old ecoregions and without losing performance on them. A national compilation of landslide inventories by the U.S. Geological Survey (USGS) was used to develop the database for this study. We focused on space-visible landslides within four ecoregions. These landslides were manually identified and labeled using high-resolution satellite images from Google Earth. The database contains 496 labeled and georeferenced pre-event/post-event image pairs, corresponding to 1,918 landslide records in the USGS landslide inventory. Using the TSMU framework, we conduct extensive experiments on these four ecoregions in the United States to address the aforementioned questions.
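To make the shared-encoder/per-ecoregion-decoder layout concrete, here is a minimal PyTorch-style sketch; the layer sizes, module names, and single-channel segmentation head are illustrative assumptions, not the paper's actual architecture or training code.

```python
import torch.nn as nn

class TSMUSegmenter(nn.Module):
    """Shared encoder + one decoder per ecoregion (illustrative sketch)."""

    def __init__(self, in_channels=6):  # e.g., a stacked pre-/post-event RGB pair
        super().__init__()
        # Encoder shared across all ecoregions: captures their similarities.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        # Ecoregion-specific decoders: capture each region's nuances.
        self.decoders = nn.ModuleDict()

    def add_ecoregion(self, name):
        # Called when a batch from a new ecoregion arrives; existing decoders
        # are left in place (and can be frozen) so performance on earlier
        # ecoregions is preserved without revisiting their data.
        self.decoders[name] = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),  # per-pixel landslide logit
        )

    def forward(self, x, ecoregion):
        return self.decoders[ecoregion](self.encoder(x))
```

In a Learning-without-Forgetting-style update, the old decoders' outputs on the new ecoregion's images would serve as distillation targets while the shared encoder and the new decoder are trained, which is how performance on old ecoregions can be retained without access to their data.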
We use the landslide inventory database provided by the United States Geological Survey (USGS). USGS maintains a database of landslide reports with approximate locations and times, but no images; it is the most extensive dataset of its kind. We extract satellite images from Google Earth using this inventory.
Landslides are common natural disasters around the globe. Understanding the accurate spatial distribution of landslides is essential for landslide analysis, prediction, and hazard mitigation. Many techniques have been used to establish landslide inventories through landslide mapping, but these techniques have either a low automation level (e.g., visual interpretation-based methods) or low generalization ability (e.g., pixel-based or object-based approaches), so improvements in landslide mapping are still required. We have therefore developed an interactive, user-friendly web portal for landslide labeling. The web portal takes multi-temporal satellite images as input. A deep learning model first detects landslide-suspicious areas in the images and presents the results to users for validation. Users can then review and annotate these machine-labeled landslides through a user-friendly interface. Users' edits to the landslide annotations further improve the accuracy of the deep learning model. Two landslide-affected regions in Washington were selected to test the capability of our web portal for landslide mapping. The detected landslides were validated by expert labelers. The results indicate that our annotation tool can produce landslide maps with high precision, a high annotation rate, and reduced human effort.
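For concreteness, the review-and-retrain cycle described above could be sketched as follows; the callable names and signatures are hypothetical placeholders rather than the portal's actual API, which is not specified in this summary.

```python
from typing import Callable, Iterable

def annotation_cycle(
    model,
    image_batches: Iterable,
    detect: Callable,     # DL model proposes landslide-suspicious pixel masks
    review: Callable,     # users validate/edit the machine labels in the web UI
    fine_tune: Callable,  # user corrections are fed back to update the model
):
    """One pass of the human-in-the-loop labeling workflow (sketch)."""
    for images in image_batches:
        proposals = detect(model, images)            # machine pre-labeling
        corrected = review(images, proposals)        # expert validation and edits
        model = fine_tune(model, images, corrected)  # model improves from edits
    return model
```

The key design point is the feedback edge: each round of human edits becomes new training data, so the detector's precision should improve as annotation proceeds.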