2015
DOI: 10.1016/j.ijar.2015.05.007
Probabilism, entropies and strictly proper scoring rules

Cited by 16 publications (11 citation statements). References 53 publications.
“…The entropy maximization is a well-known method for determination of prior probabilities [31][32][33][34][35][36][37]. This method was developed within statistical physics [38], and it reflects the second law of thermodynamics, i.e., the natural tendency of the entropy to increase in closed systems [38].…”
Section: A General Discussion On Entropy Maximization Versus Its Mini...
Citation type: mentioning
confidence: 99%
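The quoted passage describes entropy maximization as a standard method for fixing prior probabilities. As a minimal illustration (my own sketch, not code from the cited papers), the following checks numerically that, with no constraints beyond normalisation, the uniform distribution maximises Shannon entropy among sampled candidate priors:

```python
import math
import random

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum_i p_i * log(p_i), in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def random_dist(n, rng):
    """A random probability distribution over n outcomes (normalised weights)."""
    w = [rng.random() for _ in range(n)]
    total = sum(w)
    return [x / total for x in w]

n = 4
uniform = [1.0 / n] * n
h_uniform = shannon_entropy(uniform)  # equals log(n)

# With no constraints beyond normalisation, no sampled candidate prior
# attains higher entropy than the uniform distribution.
rng = random.Random(0)
assert all(shannon_entropy(random_dist(n, rng)) <= h_uniform
           for _ in range(1000))
```

Adding moment constraints would instead yield a Gibbs-type maximum-entropy prior, which is the statistical-physics connection the quote alludes to.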
“…Likewise, one shows that Ŝ(E) is a concave function of E: d²Ŝ/dE² ≤ 0; cf. (36). Now using (41, 39) we find…”
Section: Entropy Maximization For Risk-seeking Agents
Citation type: mentioning
confidence: 99%
“…However, our analysis is unique in that it focuses on measuring a model's behavior in uncertainty quantification and takes a rigorous, decision-theoretic view of the problem. As a result, it works with a special family of risk functions (i.e., the strictly proper scoring rule) that measure a model's performance in uncertainty calibration, handles the existence of unknown domains via a minimax formulation, and derives the solution by using a generalized version of the maximum entropy theorem for Bregman scores [27,42]. The form of the optimal solution we derived in (5) takes an intuitive form, and has already been used widely as a training objective in many uncertainty works that leverage adversarial training and generative modeling to detect OOD examples [30,31,46,50,51].…”
Section: Related Work
Citation type: mentioning
confidence: 99%
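The strictly proper scoring rules that this citation statement turns on have a simple defining property: the expected score is uniquely minimised by reporting the true probability. As an illustrative sketch (the function names and the grid search are mine, not the cited paper's), this can be verified numerically for the quadratic Brier score on a binary event:

```python
def brier_score(forecast, outcome):
    """Quadratic (Brier) score for a binary event; lower is better."""
    return (forecast - outcome) ** 2

def expected_brier(forecast, true_p):
    """Expected Brier score when the event occurs with probability true_p."""
    return (true_p * brier_score(forecast, 1.0)
            + (1.0 - true_p) * brier_score(forecast, 0.0))

# Strict propriety: over a grid of candidate reports, the expected score
# is uniquely minimised by reporting the true probability itself.
true_p = 0.7
candidates = [i / 100 for i in range(101)]
best = min(candidates, key=lambda q: expected_brier(q, true_p))
assert best == true_p
```

Algebraically, the expected score is (q − p)² + p(1 − p), so the minimiser q = p is unique, which is what makes the rule *strictly* proper rather than merely proper.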
“…Pettigrew (2016b) favours the quadratic Brier score, though his justification of PI1 considered a class of inaccuracy measures. Different classes of inaccuracy measures have appeared in the literature, often delineated by technical fruitfulness rather than philosophical considerations—e.g., ‘strictly proper’ inaccuracy measures are particularly conducive to proving the required theorems (Landes 2015). As yet, we are far from a consensus as to which functions are appropriate as inaccuracy measures.…”
Section: Consequences For Consequentialism
Citation type: mentioning
confidence: 99%