Modern advances in cloud computing and machine-learning algorithms are shifting the manner in which Earth-observation (EO) data are used for environmental monitoring, particularly as we settle into the era of free, open-access satellite data streams. Wetland delineation represents a particularly worthy application of this emerging research trend, since wetlands are an ecologically important yet chronically under-represented component of contemporary mapping and monitoring programs, particularly at the regional and national levels. Exploiting Google Earth Engine and the R statistical software, we developed a workflow for predicting the probability of wetland occurrence using a boosted regression tree machine-learning framework applied to digital topographic and EO data. Working in a 13,700 km² study area in northern Alberta, our best models produced excellent results, with an AUC (area under the receiver-operating characteristic curve) value of 0.898 and an explained-deviance value of 0.708. Our results demonstrate the central role of high-quality topographic variables for modeling wetland distribution at regional scales. Including optical and/or radar variables in the workflow substantially improved model performance, though optical data performed slightly better. Converting our wetland probability-of-occurrence model into a binary Wet-Dry classification yielded an overall accuracy of 85%, which is virtually identical to that derived from the Alberta Merged Wetland Inventory (AMWI): the contemporary inventory used by the Government of Alberta. However, our workflow contains several key advantages over that used to produce the AMWI, and provides a scalable foundation for province-wide monitoring initiatives.
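The workflow above — fit a boosted-tree model to predictor layers, evaluate with AUC, then threshold the predicted probability into a binary Wet-Dry map — can be sketched in a few lines. This is a minimal illustration using scikit-learn's `GradientBoostingClassifier` as a stand-in for a boosted regression tree; the predictor names and the synthetic data are assumptions for demonstration, not the study's actual covariates.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
# Synthetic predictors standing in for topographic and EO layers
# (e.g., wetness index, elevation, an optical index, radar backscatter)
X = rng.normal(size=(n, 4))
# Simulated wetland presence driven mainly by the "topographic" columns
p = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - 0.8 * X[:, 1])))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                   max_depth=3, random_state=0)
model.fit(X_tr, y_tr)

# Continuous probability-of-occurrence prediction, scored with AUC
prob = model.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, prob)
print(f"AUC = {auc:.3f}")

# Thresholding the probabilities yields a binary Wet-Dry classification
wet_dry = (prob >= 0.5).astype(int)
accuracy = (wet_dry == y_te).mean()
print(f"Overall accuracy = {accuracy:.1%}")
```

In practice the predictors would be per-pixel values exported from Google Earth Engine, and the probability threshold would be tuned rather than fixed at 0.5.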
Humans affect fire regimes by providing ignition sources in some cases, suppressing wildfires in others, and altering natural vegetation in ways that may either promote or limit fire. In North America, several studies have evaluated the effects of society on fire activity; however, most studies have been regional or subcontinental in scope and used different data and methods, thereby making continent-wide comparisons difficult. We circumvent these challenges by investigating the broad-scale impact of humans on fire activity using parallel statistical models of fire probability from 1984 to 2014 as a function of climate, enduring features (topography and percent nonfuel), lightning, and three indices of human activity (population density, an integrated metric of human activity [Human Footprint Index], and a measure of remoteness [roadless volume]) across equally spaced regions of the United States and Canada. Through a statistical control approach, whereby we account for the effect of other explanatory variables, we found evidence of non-negligible human-wildfire association across the entire continent, even in the most sparsely populated areas. A surprisingly coherent negative relationship between fire activity and humans was observed across the United States and Canada: fire probability generally diminishes with increasing human influence. Intriguing exceptions to this relationship are the continent's least disturbed areas, where fewer humans equate to less fire. These remote areas, however, also often have lower lightning densities, leading us to believe that they may be ignition limited at the spatiotemporal scale of the study. Our results suggest that there are few purely natural fire regimes in North America today. Consequently, projections of future fire activity should consider human impacts on fire regimes to ensure sound adaptation and mitigation measures in fire-prone areas.
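The "statistical control" approach described above — modeling fire probability as a function of climate, enduring features, and lightning alongside a human-activity index, so that the human coefficient reflects the association after accounting for the other drivers — can be illustrated with a simple logistic regression. The variable names and simulated data below are assumptions for demonstration only, not the study's data or model specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 5000
climate = rng.normal(size=n)
topography = rng.normal(size=n)
lightning = rng.normal(size=n)
# Human influence is correlated with climate, which is why "control"
# covariates are needed to isolate its association with fire
human_footprint = 0.4 * climate + rng.normal(size=n)

# Simulated truth: fire probability declines with human influence
logit = 1.2 * climate + 0.5 * lightning - 0.8 * human_footprint
fire = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = np.column_stack([climate, topography, lightning, human_footprint])
model = LogisticRegression().fit(X, fire)

# With the other drivers held in the model, the human-footprint
# coefficient recovers the negative human-fire association
coef = dict(zip(["climate", "topography", "lightning", "human_footprint"],
                model.coef_[0]))
print(coef)
```

The sign of the `human_footprint` coefficient, conditional on the controls, is the quantity of interest; fitting the same specification independently across regions parallels the study's continent-wide comparison.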
Advances in machine learning have transformed many fields of study and have also drawn attention in a variety of remote sensing applications. In particular, deep convolutional neural networks (CNNs) have proven very useful in fields such as image recognition; however, the use of CNNs in large-scale remote sensing landcover classification still needs further investigation. We set out to test CNN-based landcover classification against a more conventional XGBoost shallow-learning algorithm for mapping a notoriously difficult group of landcover classes: wetland classes as defined by the Canadian Wetland Classification System. We developed two wetland-inventory-style products for a large (397,958 km²) area in the Boreal Forest region of Alberta, Canada, using Sentinel-1, Sentinel-2, and ALOS DEM data acquired in Google Earth Engine. We then tested the accuracy of these two products against three validation data sets (two photo-interpreted and one field-based). The CNN-generated wetland product proved to be more accurate than the shallow-learning XGBoost wetland product by 5%. The overall accuracy of the CNN product was 80.2%, with a mean F1-score of 0.58. We believe that CNNs are better able to capture natural complexities within wetland classes, and thus may be very useful for complex landcover classifications. Overall, this CNN framework shows great promise for generating large-scale wetland inventory data and may prove useful for other landcover mapping applications.
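The two accuracy metrics quoted above — overall accuracy and a mean (macro-averaged) F1-score across wetland classes — can be computed directly from predicted and reference labels. The class names below follow the Canadian Wetland Classification System, but the toy label vectors are assumptions for illustration, not the study's validation data.

```python
from sklearn.metrics import accuracy_score, f1_score

classes = ["bog", "fen", "marsh", "swamp", "open water", "upland"]

# Toy reference (photo-interpreted/field) labels and model predictions
y_true = ["bog", "fen", "fen", "marsh", "swamp", "open water",
          "upland", "upland", "bog", "fen"]
y_pred = ["bog", "fen", "bog", "marsh", "swamp", "open water",
          "upland", "fen", "bog", "fen"]

# Overall accuracy: fraction of validation samples labeled correctly
overall = accuracy_score(y_true, y_pred)

# Mean F1: per-class F1 averaged with equal weight per class ("macro"),
# which penalizes poor performance on rare wetland classes
mean_f1 = f1_score(y_true, y_pred, labels=classes, average="macro")

print(f"Overall accuracy = {overall:.1%}, mean F1 = {mean_f1:.2f}")
```

Because macro averaging weights every class equally, a product can score a high overall accuracy while its mean F1 remains much lower — consistent with the 80.2% accuracy versus 0.58 mean F1 reported for the CNN product.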