The use of imagery from small unmanned aircraft systems (sUAS) has enabled the production of more accurate data about the effects of wildland fire, allowing land managers to make more informed decisions. Detecting trees in hyperspatial imagery makes it possible to calculate canopy cover, and comparing hyperspatial post-fire canopy cover with pre-fire canopy cover from sources such as the LANDFIRE project yields tree mortality, a major indicator of burn severity. A mask region-based convolutional neural network (MR-CNN) was trained to classify trees as groups of pixels in a hyperspatial orthomosaic acquired with an sUAS. The tree classification was aggregated to a 30 m grid, producing a canopy cover raster. The post-fire canopy cover was then compared with the pre-fire LANDFIRE canopy cover to calculate how much the canopy was reduced by the fire. Canopy reduction allows burn severity to be mapped while also identifying where surface, passive crown, and active crown fire occurred within the burn perimeter. Canopy cover mapped through this effort was lower than the LANDFIRE Canopy Cover product, which the literature indicates is typically over-reported. Assessment of canopy reduction mapping on a wildland fire agrees with observations made both during ground-truthing efforts and from the associated hyperspatial sUAS orthomosaic.
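
The canopy cover and canopy reduction steps can be illustrated with a short sketch. The code below is a minimal illustration rather than the study's implementation: it assumes the MR-CNN tree classification is available as a boolean raster aligned with the orthomosaic, that the pre-fire LANDFIRE canopy cover is a co-registered 30 m percent raster, and that the fire-type thresholds are placeholder assumptions, not values from the paper.

    import numpy as np

    def canopy_cover_30m(tree_mask: np.ndarray, block: int) -> np.ndarray:
        """Percent canopy cover per 30 m cell from a fine-resolution tree mask.
        `block` is the number of fine pixels per 30 m cell; trailing pixels
        that do not fill a whole cell are dropped."""
        h, w = tree_mask.shape
        cells = tree_mask[: h - h % block, : w - w % block].astype(float)
        cells = cells.reshape(h // block, block, w // block, block)
        return 100.0 * cells.mean(axis=(1, 3))

    def canopy_reduction(pre_cc: np.ndarray, post_cc: np.ndarray) -> np.ndarray:
        """Percent of pre-fire canopy lost in each 30 m cell."""
        pre = pre_cc.astype(float)
        post = post_cc.astype(float)
        with np.errstate(divide="ignore", invalid="ignore"):
            reduction = np.where(pre > 0, 100.0 * (pre - post) / pre, 0.0)
        return np.clip(reduction, 0.0, 100.0)

    def classify_fire_type(reduction: np.ndarray,
                           surface_max: float = 25.0,
                           passive_max: float = 75.0) -> np.ndarray:
        """0 = surface fire, 1 = passive crown fire, 2 = active crown fire.
        The thresholds are illustrative assumptions."""
        classes = np.zeros(reduction.shape, dtype=np.uint8)
        classes[reduction > surface_max] = 1
        classes[reduction > passive_max] = 2
        return classes

In practice the post-fire canopy cover raster would also need to be resampled onto the LANDFIRE grid before differencing.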
Support vector machines have been shown to be highly effective in mapping burn extent from hyperspatial imagery in grasslands. Unfortunately, this pixel-based method is hampered in forested environments that have experienced low-intensity fires because unburned tree crowns obstruct the view of the surface vegetation, causing surface fires to be misclassified as unburned. To account for this misclassification of areas under tree crowns, trees surrounded by surface burn can be assumed to have burned underneath. This effort used a mask region-based convolutional neural network (MR-CNN) and a support vector machine (SVM) to identify trees and burned pixels in a post-fire forest. The output classifications of the MR-CNN and SVM were used to find tree crowns in the image that are surrounded by burned surface vegetation and to label the pixels under those crowns as being within the fire’s extent. This approach eliminates burn extent false negatives caused by surface burns obscured by unburned tree crowns, increasing burn extent mapping accuracy by nine percentage points.
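
The crown-infill step described above can be sketched as follows, assuming the MR-CNN tree detections and the SVM burn classification have been rasterized to co-registered boolean masks. The function name, ring width, and the fraction of the surrounding ring that must be burned are illustrative assumptions rather than details from the paper.

    import numpy as np
    from scipy import ndimage

    def infill_burn_under_crowns(tree_mask: np.ndarray,
                                 burn_mask: np.ndarray,
                                 surrounded_frac: float = 0.9,
                                 ring_width: int = 3) -> np.ndarray:
        """Add tree crowns surrounded by burned surface pixels to the burn extent."""
        filled = burn_mask.copy()
        crowns, n_crowns = ndimage.label(tree_mask)  # connected tree-crown regions
        for crown_id in range(1, n_crowns + 1):
            crown = crowns == crown_id
            # Ring of non-tree pixels immediately outside this crown.
            ring = ndimage.binary_dilation(crown, iterations=ring_width) & ~tree_mask
            if not ring.any():
                continue
            # If the visible surface around the crown is (mostly) burned,
            # assume the obscured surface beneath it burned as well.
            if burn_mask[ring].mean() >= surrounded_frac:
                filled |= crown
        return filled

Filling in these obscured crowns is what removes the false negatives responsible for the accuracy gain reported above.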