2021
DOI: 10.1002/ece3.7921
Predicting insect outbreaks using machine learning: A mountain pine beetle case study

Abstract: This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

Cited by 26 publications (7 citation statements) | References 67 publications
“…Additionally, we assumed no manipulation, or forced structure in construction of the BN, as to allow the entire structure to be learned from the training data set only. We then learned the BN parameters, and compared the prediction accuracy of the resulting BN with four predictive models: generalized linear model (GLM), naive Bayes (NB), boosted decision tree (or, gradient boosting machine (GBM)), k nearest neighbor (KNN), and neural network (NN) (Ramazi et al., 2021b). We used the area under the curve (AUC) (Hajian‐Tilaki, 2013) as the performance measure over the test data set.…”
Section: Methods (mentioning)
confidence: 99%
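The model comparison this statement describes can be illustrated with a short sketch. This is not the cited authors' code: it assumes scikit-learn, a generic feature table X with binary outbreak labels y, and uses logistic regression as the GLM baseline; all names and settings are illustrative.

# Illustrative sketch (not the cited implementation): fit several baseline
# classifiers and compare them by AUC on a held-out test set.
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

models = {
    "GLM": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
    "GBM": GradientBoostingClassifier(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "NN": MLPClassifier(max_iter=1000),
}

def compare_auc(X_train, y_train, X_test, y_test):
    """Fit each baseline and report its AUC over the test data set."""
    scores = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        proba = model.predict_proba(X_test)[:, 1]  # probability of the positive (outbreak) class
        scores[name] = roc_auc_score(y_test, proba)
    return scores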
“…As this is a time-series prediction task, the division was done temporally, where the beginning “chunk” of the data is taken as the training and the remaining chunk as the test dataset (Ramazi et al. 2021b, a). Moreover, to increase evaluation reliability, several training and test durations were considered.…”
Section: Methods (mentioning)
confidence: 99%
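As a rough illustration of the temporal split described in this statement (not the cited implementation), the sketch below assumes a pandas data frame with a 'year' column; the cutoff years are placeholders, and trying several cutoffs corresponds to the multiple training/test durations mentioned above.

# Illustrative sketch, assuming a pandas DataFrame with a 'year' column.
import pandas as pd

def temporal_split(df: pd.DataFrame, cutoff_year: int):
    """Earlier records form the training chunk, later records the test chunk."""
    train = df[df["year"] <= cutoff_year]
    test = df[df["year"] > cutoff_year]
    return train, test

# Evaluate over several training/test durations, e.g.:
# for cutoff in (2005, 2008, 2011):
#     train, test = temporal_split(outbreak_df, cutoff)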
“…To validate the submodel for angler traffic between localities and subbasins, we randomly split the app data into a training (fitting) and a testing (validation) dataset, each containing observations for half of the anglers respectively. Note that a random split is in line with our purpose of evaluating the model accuracy in predicting unreported trips, and hence, a temporal split used for evaluating the model accuracy in making future predictions is not needed (Ramazi, Kunegel-Lion, et al, 2021). We fitted our model to the training data and computed the mean yearly number of recorded angler trips for each origin, destination and origin-destination pair.…”
Section: Model Evaluation (mentioning)
confidence: 99%
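A minimal sketch of the kind of by-angler random split this statement describes, so that each angler's trips fall wholly in either the training or the testing half. It is not the authors' code; the 'angler_id' column name is an assumption.

# Illustrative sketch, assuming pandas/NumPy and a hypothetical 'angler_id' column.
import numpy as np
import pandas as pd

def split_by_angler(df: pd.DataFrame, seed: int = 0):
    """Randomly assign half of the anglers to training and half to testing."""
    rng = np.random.default_rng(seed)
    anglers = df["angler_id"].unique()
    rng.shuffle(anglers)
    train_ids = set(anglers[: len(anglers) // 2])
    train = df[df["angler_id"].isin(train_ids)]
    test = df[~df["angler_id"].isin(train_ids)]
    return train, test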