2019
DOI: 10.48550/arxiv.1912.07048
Preprint

Integral Mixability: a Tool for Efficient Online Aggregation of Functional and Probabilistic Forecasts

Abstract: In this paper we extend the setting of online prediction with expert advice to function-valued forecasts. At each step of the online game several experts predict a function, and the learner has to efficiently aggregate these functional forecasts into a single forecast. We adapt basic mixable loss functions to compare functional predictions and prove that these "integral" expansions are also mixable. We call this phenomenon integral mixability. As an application, we consider various loss functions for pre…
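
For context, the mixability property the abstract refers to can be stated as follows. This is the standard definition from the expert-advice literature (Vovk's aggregating-algorithm setting), reproduced here as a reading aid rather than quoted from the paper:

```latex
% \eta-mixability (standard definition): a loss \lambda is \eta-mixable
% if any weighted set of expert predictions can be replaced by a single
% prediction that is at least as good, in the exponential-mixture sense,
% simultaneously for every outcome y.
\[
\exists\, \gamma = \gamma(w, \gamma_1, \dots, \gamma_N):\quad
\lambda(\gamma, y) \;\le\; -\frac{1}{\eta}
\ln \sum_{i=1}^{N} w_i\, e^{-\eta\, \lambda(\gamma_i, y)}
\quad \text{for all } y,
\]
\[
\text{for any expert predictions } \gamma_1, \dots, \gamma_N
\text{ and weights } w_i \ge 0,\ \textstyle\sum_i w_i = 1 .
\]
```

The paper's contribution ("integral mixability") is that losses built by integrating a mixable loss over a weight measure inherit this property, so functional and probabilistic forecasts can be aggregated with the same guarantees.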

Cited by 3 publications (5 citation statements) | References 4 publications
“…Now we discuss the learning theory with respect to the CRPS (22), which can be regarded as an integral-based loss in terms of learning (Korotin et al., 2019).…”
Section: CRPS Learning and Its Optimality
Citation type: mentioning (confidence: 99%)
“…Unfortunately, the quantile loss is not exp-concave, thus an application of the EWA and EWAG does not guarantee optimal convergence properties by standard arguments from the previous section. In contrast, the CRPS is exp-concave for random variables with bounded support, see Korotin et al. (2019). Thus, when considering learning algorithms with the structure (24), we obtain that the corresponding EWA algorithms satisfy optimalities (8) and, for gradient-based algorithms, additionally (9).…”
Section: CRPS Learning and Its Optimality
Citation type: mentioning (confidence: 99%)
“…The definition (17) is a special case of this definition (up to a factor), where µ(u) = 1/(b−a) for u ∈ [a, b] and µ(u) = 0 otherwise. It can be proved that the function (18) is η-mixable for 0 < η ≤ 2 and η-exponentially concave for 0 < η ≤ 1/2 (see Korotin et al. (2019)). The CRPS score measures the difference between the forecast F and a perfect forecast H(u − y), which puts all mass on the verification y.…”
Section: Aggregation of Probability Forecasts
Citation type: mentioning (confidence: 99%)
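
For reference, the weighted CRPS described in this statement can be written out as below; this reconstruction follows the quoted description (with H the Heaviside step function) and is a reading aid, not text from the citing paper:

```latex
% Weighted CRPS of a forecast CDF F at verification y:
\[
\mathrm{CRPS}_\mu(F, y) \;=\; \int \bigl(F(u) - H(u - y)\bigr)^2\, \mu(u)\, du .
\]
% The quoted special case uses the uniform weight on a bounded interval:
% \mu(u) = \tfrac{1}{b - a} for u \in [a, b] and \mu(u) = 0 otherwise,
% under which the loss is \eta-mixable for 0 < \eta \le 2 and
% \eta-exponentially concave for 0 < \eta \le 1/2.
```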
“…The definition (19) is a special case of this definition (up to a factor), where µ(u) = 1/(b−a) for u ∈ [a, b] and µ(u) = 0 otherwise. It can be proved that the function (20) is η-mixable for 0 < η ≤ 2 and η-exponentially concave for 0 < η ≤ 1/2 (see Korotin et al. 2019). The CRPS score measures the difference between the forecast F and a perfect forecast H(u − y), which puts all mass on the verification y.…”
Section: Endfor
Citation type: mentioning (confidence: 99%)