2021
DOI: 10.1037/rev0000300

Information-theoretic signal detection theory.

Abstract: Signal detection theory (SDT), the standard mathematical framework by which we understand how stimuli are classified into distributions such as signal or noise, is an essential part of the modern psychologist's toolkit. This article introduces some mathematical tools derived from information theory which allow surprisingly simple approximations to key quantities in SDT. The main idea is a lower bound on the probability of correct classification of a stimulus, as a function of information-theoretic properties o…
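The abstract is truncated above, so the paper's specific information-theoretic bound is not reproduced here. As background, the following is a minimal sketch of the standard equal-variance Gaussian SDT setup the paper builds on; the d′ value and the equal priors are assumptions chosen for illustration, not taken from the paper.

```python
# Minimal sketch of the standard equal-variance Gaussian SDT setup
# (illustrative parameters; the paper's own bound is not reproduced here).
from scipy.stats import norm

d_prime = 1.5            # assumed separation between noise and signal means
criterion = d_prime / 2  # unbiased midpoint criterion, assuming equal priors

hit_rate = 1 - norm.cdf(criterion, loc=d_prime)   # p(respond 'signal' | signal)
fa_rate = 1 - norm.cdf(criterion)                 # p(respond 'signal' | noise)
p_correct = 0.5 * hit_rate + 0.5 * (1 - fa_rate)  # ideal-observer accuracy

print(f"d' = {d_prime}: P(correct) = {p_correct:.3f}")  # ≈ 0.773
```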

Cited by 13 publications (10 citation statements)
References 48 publications

“…Here, p(f|f) denotes the probability of the ‘forward’ response in response to ‘forward’ presentation. By applying signal detection theory [16] to p(f|f) and p(r|r), we calculated the level of separation (d′) and response bias (positive for the bias favouring the forward judgement) by using the formulas as follows: d′ = norminv(p(f|f)) + norminv(p(r|r)) and bias = (norminv(p(f|f)) − norminv(p(r|r)))/2, where norminv denotes the normal inverse cumulative distribution function. We substituted 0.01 and 0.99 for 0 and 1, respectively, to prevent the values from diverging to infinity.…”
Section: Methods
confidence: 99%
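As a concrete reading of the quoted formulas, here is a minimal Python sketch, assuming scipy's norm.ppf plays the role of norminv; the example response rates are made up for illustration.

```python
from scipy.stats import norm

def d_prime_and_bias(p_ff, p_rr):
    """d' and response bias from the two correct-response rates,
    following the formulas in the citation above."""
    # Replace 0 and 1 with 0.01 and 0.99 (as in the quoted methods) so the
    # inverse normal CDF stays finite; implemented here as a clip.
    p_ff = min(max(p_ff, 0.01), 0.99)
    p_rr = min(max(p_rr, 0.01), 0.99)
    z_ff = norm.ppf(p_ff)  # norminv: inverse of the standard normal CDF
    z_rr = norm.ppf(p_rr)
    d_prime = z_ff + z_rr
    bias = (z_ff - z_rr) / 2  # positive values favour the 'forward' judgement
    return d_prime, bias

# Made-up example rates: 85% correct on 'forward', 70% correct on 'reverse'.
print(d_prime_and_bias(0.85, 0.70))  # ≈ (1.56, 0.26)
```

Note that clipping to [0.01, 0.99] slightly generalizes the quoted procedure, which substitutes those values only for rates of exactly 0 and 1; for rates strictly inside that interval the two are identical.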
“…By definition, fidelity is maximal for the feature corresponding to the ideal observer and is less for other features, but again is only near 1 with a very well-separated mixture for which the overlap between components is negligible. The uncertainty that remains even for an ideal observer using all available features is the ineliminable uncertainty, attributable to the intrinsic overlap among the classes (see below and see Feldman, 2021a).…”
Section: Fidelity Of Representation
confidence: 99%
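A minimal sketch of that ineliminable uncertainty, assuming a two-component equal-variance Gaussian mixture in one dimension with equal priors; all parameter values are illustrative, not from the paper.

```python
# Even the ideal observer's error is bounded below by the overlap between
# mixture components (the "ineliminable uncertainty" quoted above).
from scipy.stats import norm

mu0, mu1, sigma = 0.0, 1.0, 1.0   # assumed component means and shared SD
boundary = (mu0 + mu1) / 2        # ideal-observer boundary for equal priors

# Bayes error: the mass of each component falling on the wrong side.
bayes_error = (0.5 * (1 - norm.cdf(boundary, mu0, sigma))
               + 0.5 * norm.cdf(boundary, mu1, sigma))
print(f"ineliminable error ≈ {bayes_error:.3f}")  # ≈ 0.309 for this overlap
```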
“…Even an ideal observer—that is, one who (a) possesses an accurate model of the environment and (b) uses it optimally to classify data—cannot necessarily answer this question accurately all the time. In most realistic mixtures, the component distributions overlap to some degree, meaning that for each datum there remains some ineliminable ambiguity about its origin (see Feldman, 2021a). However, a rational observer (including an ideal one) can divide the feature space into regions within which each hypothetical class has maximum-posterior probability.…”
Section: Mixtures
confidence: 99%
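A small sketch of that maximum-posterior partition, assuming a one-dimensional two-class Gaussian model with unequal priors; the means, priors, and search grid are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Assumed class models: means, unequal priors, shared SD (illustrative).
mus, priors, sigma = np.array([0.0, 2.0]), np.array([0.7, 0.3]), 1.0

def map_class(x):
    """Assign x to the class with maximum posterior probability (MAP rule)."""
    post = priors * norm.pdf(x, mus, sigma)  # unnormalized posteriors
    return int(np.argmax(post))

# The feature line splits into class regions; scan a grid for the boundary.
xs = np.linspace(-2, 4, 601)
labels = np.array([map_class(x) for x in xs])
boundary = xs[np.argmax(labels == 1)]  # first grid point assigned to class 1
print(f"class-1 region begins near x ≈ {boundary:.2f}")  # ≈ 1.43 here
```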