Global fire monitoring systems are crucial for studying fire behaviour, fire regimes and their impact at the global scale. Although global fire products based on Earth Observation satellites exist, most remote sensing products only partially cover the requirements of these analyses. These data do not provide information such as fire size, fire spread speed, how fires evolve and merge into single events, or the number of fire events in a given area. This higher level of abstraction is very valuable: it makes it possible to characterize fires by type (size, spread, behaviour, etc.). Here, we present and test a data mining workflow to create a global database of single fires that allows the characterization of fire types and fire regimes worldwide. This work describes the data produced by a data mining process applied to the MODIS Collection 6 burnt area product (MCD64A1). The entire product has been computed up to the present and is available under the umbrella of the Global Wildfire Information System (GWIS).
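The workflow itself is not detailed in this summary, but the core idea of dissolving burnt-area pixels into individual fire events can be illustrated with a minimal sketch. The snippet below flood-fills spatially adjacent pixels whose burn dates fall on the same or adjacent days; the array names, the connectivity rule and the one-day temporal tolerance are illustrative assumptions and not the actual GWIS/MCD64A1 processing chain.

```python
import numpy as np
from scipy import ndimage

def label_fire_events(burn_date, n_days=366):
    """burn_date: 2-D integer array of burn day-of-year per pixel (0 = unburned).
    Pixels are placed in a (rows, cols, day) boolean cube and flood-filled,
    so two pixels join the same event only if they touch spatially
    (8-neighbourhood) and burn on the same or adjacent days."""
    rows, cols = burn_date.shape
    cube = np.zeros((rows, cols, n_days + 1), dtype=bool)
    r, c = np.nonzero(burn_date)
    cube[r, c, burn_date[r, c]] = True
    # Full 3x3x3 connectivity: 8 spatial neighbours and +/- 1 day in time.
    labels3d, n_events = ndimage.label(cube, structure=np.ones((3, 3, 3)))
    event_id = labels3d.max(axis=2)  # each pixel burns once, so max = its event id
    return event_id, n_events

# Toy tile: two adjacent pixels burning on days 120-121 form one event,
# a distant pixel burning on day 200 forms another.
tile = np.zeros((5, 5), dtype=int)
tile[0, 0], tile[0, 1], tile[4, 4] = 120, 121, 200
print(label_fire_events(tile))
```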
Forest fire propagation prediction is a crucial issue for fighting these hazards as efficiently as possible. Several propagation models have been developed and integrated into computer simulators. Such models require a set of input parameters that, in some cases, are difficult to know or even estimate precisely beforehand. Therefore, a calibration technique based on a genetic algorithm (GA) was introduced to reduce the uncertainty in input parameter values and improve the accuracy of the predictions. Such a technique requires the execution of a set of simulations and several iterations of the process to calibrate the values of the input parameters. To reduce the execution time of this calibration stage, a Message Passing Interface (MPI) master/worker scheme was developed to distribute the simulations of one iteration among the worker processes. However, the execution time of each simulation varies drastically depending on the particular input parameters used, causing a significant load imbalance. To overcome this imbalance and reduce the execution time to meet operational requirements, core allocation policies have been developed. These policies are based on estimating the execution time of each simulation and classifying simulations according to that estimate. The multicore capabilities of current systems are then exploited to devote more resources (cores) to the longest simulations, reducing the load imbalance. Simulations that are estimated to take too long, even when many resources are devoted to them, require special consideration. Therefore, a generation time limit has been introduced, and three different strategies have been designed for individuals that exceed this limit. In the first one, the longest individuals are replaced with shorter individuals before starting the execution (Time Aware Core allocation with replacement). In the second one, these individuals are executed, but when the generation limit is reached, the individuals still executing are killed (Time Aware Core allocation without replacement). In the third one, all individuals are executed normally; when the generation time limit is reached, the GA is applied to the individuals that have finished their executions, while those still executing are allowed to continue running and are considered by the GA when they finish. The three strategies have been tested in real scenarios, and the results show that these policies significantly improve the calibration accuracy within the imposed deadlines.
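As an illustration of the core allocation idea, the sketch below estimates how many cores each simulation would need to finish within the generation time limit and flags the individuals that would still exceed it, in the spirit of the Time Aware Core allocation with replacement strategy. The linear speed-up assumption, the class and function names, and all thresholds are hypothetical; this is not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Individual:
    ident: int
    est_serial_time: float  # estimated single-core execution time (seconds)
    cores: int = 1

def allocate_cores(population, max_cores_per_sim, time_limit):
    """Give the longest simulations more cores; individuals that would still
    exceed the generation time limit are returned for replacement."""
    scheduled, replaced = [], []
    # Longest-first so the slowest simulations are considered first.
    for ind in sorted(population, key=lambda i: i.est_serial_time, reverse=True):
        # Assumed linear speed-up: time(cores) ~= est_serial_time / cores.
        needed = -(-int(ind.est_serial_time) // int(time_limit))  # ceil division
        if needed > max_cores_per_sim:
            replaced.append(ind)  # too long even with the maximum core count
        else:
            ind.cores = max(needed, 1)
            scheduled.append(ind)
    return scheduled, replaced

# Toy example: estimated serial times (s) for five candidate simulations.
population = [Individual(i, t) for i, t in enumerate([30, 400, 95, 1200, 60])]
ok, too_long = allocate_cores(population, max_cores_per_sim=4, time_limit=120)
print([(i.ident, i.cores) for i in ok])  # [(1, 4), (2, 1), (4, 1), (0, 1)]
print([i.ident for i in too_long])       # [3]
```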
Natural hazards are a challenge for society, and the scientific community has significantly increased its efforts in prevention and damage mitigation. The key points in minimizing natural hazard damage are monitoring and prevention. This work focuses on forest fires. This phenomenon depends on small-scale factors, and fire behavior is strongly related to the local weather. Forest fire spread forecasting is a complex task because of the scale of the phenomenon, the uncertainty of the input data and the time constraints of forest fire monitoring. Forest fire simulators have been improved with calibration techniques that reduce data uncertainty and take into account complex factors such as the atmosphere. Such techniques dramatically increase the computational cost in a context where the time available to provide a forecast is a hard constraint. Furthermore, early mapping of the fire is crucial for its assessment. In this work, an unsupervised method for early forest fire detection and mapping is proposed. As its main sources, the method uses daily thermal anomalies from MODIS and VIIRS combined with a land cover map to identify and monitor forest fires with very few resources. The method relies on a clustering technique (the DBSCAN algorithm) and on filtering thermal anomalies to detect forest fires. In addition, a concave hull (alpha shape algorithm) is applied to obtain a rapid mapping of the fire area (very coarse accuracy mapping). The method therefore has potential for high-resolution forest fire rapid mapping based on satellite imagery using the extent of each early fire detection, paving the way to automatic rapid mapping of the fire at high resolution while processing as little data as possible.
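The detection-and-mapping pipeline can be sketched as follows: daily thermal anomalies are clustered with DBSCAN, and each cluster is wrapped in a concave (alpha shape) outline to obtain a coarse fire polygon. The parameter values and the use of the third-party alphashape package are assumptions for illustration only, not the paper's exact implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN
import alphashape  # third-party package; returns shapely geometries

def map_fires(hotspots_xy, eps_m=1500.0, min_samples=3, alpha=0.0005):
    """hotspots_xy: (N, 2) array of projected hotspot coordinates in metres.
    Returns a list of (cluster_label, concave_hull_geometry) pairs."""
    labels = DBSCAN(eps=eps_m, min_samples=min_samples).fit_predict(hotspots_xy)
    fires = []
    for lbl in sorted(set(labels) - {-1}):        # -1 = noise (isolated anomalies)
        pts = [tuple(p) for p in hotspots_xy[labels == lbl]]
        hull = alphashape.alphashape(pts, alpha)  # concave outline of the cluster
        fires.append((lbl, hull))
    return fires

# Toy example: four nearby anomalies become one fire polygon,
# one far-away detection is discarded as noise.
xy = np.array([[0, 0], [500, 200], [900, 700], [1200, 300], [80000, 80000]], float)
for label, hull in map_fires(xy):
    print(label, hull.geom_type, hull.bounds)
```

In such a scheme, the resulting polygons would define the extents passed on to higher-resolution mapping, as suggested in the abstract above.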