2015
DOI: 10.1515/ijb-2015-0004

A Universal Approximate Cross-Validation Criterion for Regular Risk Functions

Abstract: Selection of estimators is an essential task in modeling. A general framework is that the estimators of a distribution are obtained by minimizing a function (the estimating function) and assessed using another function (the assessment function). A classical case is that both functions estimate an information risk (specifically cross-entropy); this corresponds to using maximum likelihood estimators and assessing them by the Akaike information criterion (AIC). In more general cases, the assessment risk can be estima…
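To make the abstract's classical special case concrete, here is a minimal Python sketch (not the paper's UACV) in which the estimating and assessment functions are both the negative log-likelihood: a Gaussian model is fit by maximum likelihood and then scored with AIC and with plain leave-one-out cross-validated cross-entropy. The Gaussian example and the helper names (`aic`, `loo_cross_entropy`) are illustrative assumptions, not taken from the paper.

```python
# Sketch of the classical case from the abstract: estimate by maximum likelihood,
# assess by an estimate of the cross-entropy risk (AIC or leave-one-out CV).
import numpy as np
from scipy import stats

def aic(loglik, n_params):
    """Akaike information criterion: -2 * log-likelihood + 2 * number of parameters."""
    return -2.0 * loglik + 2.0 * n_params

def loo_cross_entropy(x):
    """Leave-one-out estimate of the cross-entropy risk for a Gaussian model:
    refit (mean, sd) without observation i, then score the held-out point."""
    scores = []
    for i in range(len(x)):
        train = np.delete(x, i)
        mu, sd = train.mean(), train.std(ddof=1)
        scores.append(-stats.norm.logpdf(x[i], loc=mu, scale=sd))
    return np.mean(scores)

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=200)

# Maximum likelihood fit of a Gaussian (2 parameters: mean and variance).
mu_hat, sd_hat = x.mean(), x.std(ddof=0)
loglik = stats.norm.logpdf(x, loc=mu_hat, scale=sd_hat).sum()

print("AIC:", aic(loglik, n_params=2))
print("LOO cross-entropy (per observation):", loo_cross_entropy(x))
```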

Cited by 9 publications (18 citation statements) · References 30 publications · Citing publications: 2016–2023

Citation statements:
“…Reported are the Akaike and Bayesian information criteria (AIC, BIC) and the universal approximate cross-validation criterion (UACV) for dementia and death data conditional on longitudinal information. UACV differences of order $10^{-1}$, $10^{-2}$, and $10^{-3}$ qualified as “large,” “moderate,” and “small,” respectively…”
Section: Results (mentioning)
confidence: 99%
“…This motivated the development of a new estimator of the Brier score by approximated leave-one-out cross-validation [16], which is valid and easy to compute on the estimation data.…”
Section: Discussion (mentioning)
confidence: 99%
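The statement above cites an approximate leave-one-out estimator of the Brier score [16]; the approximation itself is not reproduced here, so the sketch below only illustrates the quantity being approximated: an exact leave-one-out cross-validated Brier score, refitting the model once per held-out subject. The logistic-regression model and the helper name `loo_brier_score` are assumptions for illustration, not the cited estimator.

```python
# Exact leave-one-out Brier score: mean over i of (y_i - p_hat_{-i}(x_i))^2,
# where p_hat_{-i} is the model refit without observation i.
import numpy as np
from sklearn.linear_model import LogisticRegression

def loo_brier_score(X, y):
    """Exact leave-one-out cross-validated Brier score (refits the model n times)."""
    n = len(y)
    sq_errors = np.empty(n)
    for i in range(n):
        mask = np.ones(n, dtype=bool)
        mask[i] = False
        model = LogisticRegression().fit(X[mask], y[mask])
        p_i = model.predict_proba(X[i:i + 1])[0, 1]  # predicted P(y=1) for the held-out point
        sq_errors[i] = (y[i] - p_i) ** 2
    return sq_errors.mean()

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 3))
y = (X @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=150) > 0).astype(int)
print("LOO Brier score:", loo_brier_score(X, y))
```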
“…Let $m$ be the number of subjects on which the predictive accuracy is computed. It is shown [16] that the difference between $\mathcal{D}(A(\hat{\theta}_A), B(\hat{\theta}_B))$ and $\Delta(A(\hat{\theta}_A), B(\hat{\theta}_B))$ is asymptotically normal:
$$ m^{1/2}\,\big[\mathcal{D}(A(\hat{\theta}_A), B(\hat{\theta}_B)) - \Delta(A(\hat{\theta}_A), B(\hat{\theta}_B))\big] \rightarrow \mathcal{N}(0, w^2), $$
where $w^2$ can be estimated by the empirical variance $\hat{w}^2$ of the difference of the simple estimators. With $z_u$ the $u$-th quantile of a standard normal variable, the confidence interval is then $[\mathcal{D}(A(\hat{\theta}_A), B(\hat{\theta}_B)) - z_{\alpha/2}\, m^{-1/2}\, \hat{w};\ \mathcal{D}(A(\hat{\theta}_A), B(\hat{\theta}_B)) + z_{\alpha/2}\, m^{-1/2}\, \hat{w}]$.…”
Section: Evaluation Of Predictive Accuracy (mentioning)
confidence: 99%
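Under the asymptotic normality stated above, the interval is straightforward to compute. The following is a minimal sketch, assuming per-subject contributions to the cross-validated assessment are available for the two estimators A and B; the input format and variable names are assumptions for illustration, not the authors' implementation.

```python
# Normal-approximation CI for the difference in predictive accuracy between A and B:
# D +/- z_{alpha/2} * m^{-1/2} * w_hat, with w_hat^2 the empirical variance of the
# per-subject differences.
import numpy as np
from scipy import stats

def difference_ci(scores_A, scores_B, alpha=0.05):
    diffs = np.asarray(scores_A) - np.asarray(scores_B)  # per-subject differences
    m = len(diffs)
    D = diffs.mean()                                      # estimated difference
    w_hat = diffs.std(ddof=1)                             # empirical std of the differences
    z = stats.norm.ppf(1.0 - alpha / 2.0)                 # z_{alpha/2}
    half_width = z * w_hat / np.sqrt(m)
    return D - half_width, D + half_width

# Example with simulated per-subject scores for two models.
rng = np.random.default_rng(2)
scores_A = rng.normal(0.20, 0.05, size=500)
scores_B = rng.normal(0.22, 0.05, size=500)
print(difference_ci(scores_A, scores_B))
```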