2017
DOI: 10.1016/j.jmp.2017.05.006

A Tutorial on Fisher information

Abstract: In many statistical applications that concern mathematical psychologists, the concept of Fisher information plays an important role. In this tutorial we clarify the concept of Fisher information as it manifests itself across three different statistical paradigms. First, in the frequentist paradigm, Fisher information is used to construct hypothesis tests and confidence intervals using maximum likelihood estimators; second, in the Bayesian paradigm, Fisher information is used to define a default prior; lastly, …
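To make the frequentist use described in the abstract concrete, here is a minimal sketch of an asymptotic Wald confidence interval built from Fisher information, assuming a simple Bernoulli model; the function name, example data, and settings are illustrative and not taken from the tutorial itself.

```python
import numpy as np
from scipy.stats import norm

def wald_ci_bernoulli(x, level=0.95):
    """Asymptotic Wald confidence interval for a Bernoulli probability q.

    Relies on the standard result that the MLE q_hat is approximately normal
    with variance 1 / (n * I(q_hat)), where I(q) = 1 / (q * (1 - q)) is the
    Fisher information of a single Bernoulli observation.
    """
    x = np.asarray(x)
    n = x.size
    q_hat = x.mean()                                # maximum likelihood estimate
    fisher_unit = 1.0 / (q_hat * (1.0 - q_hat))     # per-observation Fisher information
    se = np.sqrt(1.0 / (n * fisher_unit))           # asymptotic standard error
    z = norm.ppf(0.5 + level / 2.0)                 # e.g. 1.96 for level = 0.95
    return q_hat - z * se, q_hat + z * se

# Example: 60 successes in 100 trials gives an interval of roughly (0.50, 0.70)
data = np.r_[np.ones(60), np.zeros(40)]
print(wald_ci_bernoulli(data))
```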

Cited by 214 publications (133 citation statements). References 96 publications.
“…As in previous work (Keshvari, van den Berg, & Ma, 2012; van den Berg, Awh, & Ma, 2014; van den Berg, Shin, Chou, George, & Ma, 2012), we define memory precision as Fisher information, J, which provides a lower bound on the variance of any unbiased estimator of the stimulus and is a common tool in the study of theoretical limits on stimulus coding and discrimination precision (Abbott & Dayan, 1999; Cover & Thomas, 2005; Ly, Marsman, Verhagen, Grasman, & Wagenmakers, 2015; Paradiso, 1988). It is monotonically related to κ through the relation $J(\kappa) = \kappa \, I_1(\kappa) / I_0(\kappa)$ (Keshvari et al., 2012), where $I_1$ is the modified Bessel function of the first kind of order 1.…”
Section: Model Construction (mentioning)
confidence: 99%
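For readers who want to reproduce the precision-to-concentration mapping quoted above, a minimal sketch follows; it assumes the von Mises encoding model used in that line of work and relies on scipy's modified Bessel functions i0 and i1. The function name is illustrative.

```python
import numpy as np
from scipy.special import i0, i1  # modified Bessel functions of the first kind, orders 0 and 1

def vonmises_fisher_information(kappa):
    """Fisher information J(kappa) = kappa * I1(kappa) / I0(kappa) of a
    von Mises distribution with concentration parameter kappa."""
    kappa = np.asarray(kappa, dtype=float)
    return kappa * i1(kappa) / i0(kappa)

# J increases monotonically with kappa and approaches kappa for large kappa
for k in (0.5, 2.0, 10.0, 50.0):
    print(k, vonmises_fisher_information(k))
```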
“…174–179 and 289–292). This recommendation is not the prior derived from Jeffreys's rule based on the Fisher information (e.g., Ly et al., 2017), as discussed in Berger and Sun (2008). With α = 1, β = γ = δ = 0, thus a uniform prior on ρ, Jeffreys showed that the marginal posterior for ρ is approximately proportional to $h_a(n, r \mid \rho)$, where $h_a(n, r \mid \rho) = (1 - \rho^2)^{\frac{n-1}{2}} (1 - \rho r)^{\frac{3 - 2n}{2}}$ represents the ρ-dependent part of the likelihood equation with $\theta_0 = (\mu_1, \mu_2, \sigma_1, \sigma_2)$ integrated out.…”
Section: Notation and Results (mentioning)
confidence: 99%
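The approximate marginal posterior quoted above is straightforward to evaluate numerically. The sketch below assumes the formula as reconstructed above, with arbitrary example values of n and r and a crude grid normalization; the function name is illustrative.

```python
import numpy as np

def jeffreys_marginal_posterior_rho(n, r, grid_size=2001):
    """Marginal posterior of the correlation rho under a uniform prior,
    proportional to h_a(n, r | rho) = (1 - rho^2)^((n - 1) / 2)
                                      * (1 - rho * r)^((3 - 2 * n) / 2),
    normalized numerically on an equally spaced grid over (-1, 1)."""
    rho = np.linspace(-0.999, 0.999, grid_size)
    h = (1.0 - rho**2) ** ((n - 1) / 2.0) * (1.0 - rho * r) ** ((3.0 - 2.0 * n) / 2.0)
    drho = rho[1] - rho[0]
    return rho, h / (h.sum() * drho)       # Riemann-sum normalization

# Example: n = 30 observations with sample correlation r = 0.4
rho, density = jeffreys_marginal_posterior_rho(30, 0.4)
print(rho[np.argmax(density)])             # posterior mode, close to r
```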
“…This depends on how a change in $q_t$ is mapped onto a change in $k_t$, which is given by the Fisher information, the variance of the first derivative of the log-likelihood function with respect to $q_t$. For the binomial distribution with n = 1 (i.e., the Bernoulli distribution), the Fisher information is (e.g., Ly et al., 2017):…”
Section: A "Toy" Model Example (mentioning)
confidence: 99%
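The excerpt above cuts off before the formula itself. As a hedged illustration of the standard result for a single Bernoulli observation (not a quote from the citing paper), the Monte Carlo check below shows that the variance of the score equals 1 / (q (1 - q)); the value of q and the simulation size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
q = 0.3                                    # assumed success probability
x = rng.binomial(1, q, size=1_000_000)     # simulated Bernoulli draws

# Score: derivative of the Bernoulli log-likelihood x*log(q) + (1-x)*log(1-q)
score = x / q - (1 - x) / (1 - q)

# Fisher information is the variance of the score; for a Bernoulli observation
# the closed form is 1 / (q * (1 - q)) = 4.7619...
print(score.var())                         # Monte Carlo estimate
print(1.0 / (q * (1.0 - q)))               # closed-form Fisher information
```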