2023
DOI: 10.1098/rsif.2022.0659

Context-dependent representation of within- and between-model uncertainty: aggregating probabilistic predictions in infectious disease epidemiology

Abstract: Probabilistic predictions support public health planning and decision making, especially in infectious disease emergencies. Aggregating outputs from multiple models yields more robust predictions of outcomes and associated uncertainty. While the selection of an aggregation method can be guided by retrospective performance evaluations, this is not always possible. For example, if predictions are conditional on assumptions about how the future will unfold (e.g. possible interventions), these assumptions may neve…

Cited by 19 publications (20 citation statements)
References 77 publications
“…Methods such as boosting, dimensionality reduction, and trimming can optimise bias‐variance trade‐offs (Wang et al., 2022). For example, trimming the tails (exterior) of the individual forecast distributions has been shown to increase confidence in the MME by reducing the variance of the individual model forecasts before being combined into an MME (Howerton et al., 2023; Zhao et al., 2022). Previous results showed that MMEs were more successful when their component model forecasts were overconfident (low variance) (Hagedorn et al., 2005; Wang et al., 2022; Weigel et al., 2008).…”
Section: Discussion
confidence: 99%
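The exterior trimming described in the statement above can be illustrated with a minimal sketch. This is not the cited papers' actual procedure; the function name, sample counts, and distributions are hypothetical, and it simply drops each model's tails before pooling samples into a multi-model ensemble (MME):

```python
import numpy as np

def exterior_trim(samples_per_model, trim_frac=0.1):
    """Drop each model's lowest and highest trim_frac of forecast
    samples, then pool what remains into a multi-model ensemble (MME).
    Discarding the tails lowers the variance each component model
    contributes to the combined forecast."""
    pooled = []
    for samples in samples_per_model:
        lo, hi = np.quantile(samples, [trim_frac, 1.0 - trim_frac])
        pooled.append(samples[(samples >= lo) & (samples <= hi)])
    return np.concatenate(pooled)

# Hypothetical forecast samples from three models for one target
rng = np.random.default_rng(0)
models = [rng.normal(100.0, sd, 1000) for sd in (5.0, 10.0, 20.0)]
mme = exterior_trim(models, trim_frac=0.1)
```

Because the exterior of each forecast distribution is discarded, the pooled MME is narrower (more confident) than a pool of the untrimmed samples, which is the effect the quoted studies report.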
“…To fit the ensemble, we used a linear opinion pool method (Jose et al., 2014; Stone, 1961) by taking the median probability assigned to predicted WNND case counts per hexagon and year across forecasts. This method of aggregation assumes that each forecast captures a possible outcome versus representing a noisy sample from a single distribution, thus maintaining between‐forecast uncertainty (Howerton et al., 2023).…”
Section: Methods
confidence: 99%
“…We used each set of quantiles to create linear opinion pool ensembles (LOP), which use linear extrapolation between the given quantiles to estimate the cumulative distribution function in order to then randomly sample trajectories to aggregate, again with equal weight; and a quantile-average ensemble, which takes the median across the different models’ values at each quantile and time step. The LOP and quantile-average ensembles have both been used to produce ensemble projections across multiple epidemiological forecasts [21], [11], [16]. To assess the difference in uncertainty across the two ensembles, we compared the mean of the values at each quantile across all time points, outcomes and scenarios.…”
Section: Methods
confidence: 99%
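The two aggregation schemes contrasted above can be sketched as follows. This is an illustrative toy, not the cited projects' code: the quantile levels, model outputs, and sample sizes are hypothetical, and the linear opinion pool (LOP) is approximated by inverting each model's piecewise-linear CDF, as in the quoted description:

```python
import numpy as np

rng = np.random.default_rng(1)
levels = np.array([0.05, 0.25, 0.5, 0.75, 0.95])

# Hypothetical quantile sets from three models that agree on spread
# but disagree on location, so between-model uncertainty dominates.
models_q = [np.quantile(rng.normal(mu, 5.0, 5000), levels)
            for mu in (40.0, 50.0, 60.0)]

def quantile_average(model_quantiles):
    """Quantile-average ensemble: median across models at each level."""
    return np.median(np.asarray(model_quantiles), axis=0)

def lop_sample(model_quantiles, levels, n=20000, rng=None):
    """Linear opinion pool sketch: pick a model at random (equal
    weights), then invert its piecewise-linear CDF at a uniform level.
    This draws from an equal-weight mixture of the models'
    distributions, preserving between-model spread."""
    rng = rng or np.random.default_rng(2)
    q = np.asarray(model_quantiles)
    picks = rng.integers(0, len(q), size=n)
    u = rng.uniform(levels[0], levels[-1], size=n)
    out = np.empty(n)
    for m in range(len(q)):
        mask = picks == m
        out[mask] = np.interp(u[mask], levels, q[m])
    return out

qa = quantile_average(models_q)
lop = lop_sample(models_q, levels)
```

When models disagree on location, the LOP samples span the full disagreement between models, while the quantile average collapses toward the central model and so yields narrower intervals — the difference in uncertainty the quoted comparison examines.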
“…Ongoing work evaluating these efforts has focused on assessing the output of past and current ensemble modelling projects. This has included evaluating differing performance among individual models [17], [18], [19], and a variety of methods for creating ensembles from multiple models [11], [16], [20], [21].…”
Section: Introduction
confidence: 99%