2009
DOI: 10.3414/me0627

Intuitive and Axiomatic Arguments for Quantifying Diagnostic Test Performance in Units of Information

Abstract: Mutual information is the best single measure of the ability of a diagnostic test to discriminate among the possible disease states.
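As a concrete illustration of the abstract's claim, the sketch below computes the mutual information I(D; T) between disease state and test result from a joint probability table. The prevalence, sensitivity, and specificity values are hypothetical, chosen only to make the example runnable; they are not taken from the paper.

```python
import math

def mutual_information(joint):
    """Mutual information I(D; T) in bits between disease state D (rows)
    and test result T (columns), given their joint probability table."""
    row_marg = [sum(row) for row in joint]        # P(D = d)
    col_marg = [sum(col) for col in zip(*joint)]  # P(T = t)
    mi = 0.0
    for d, row in enumerate(joint):
        for t, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (row_marg[d] * col_marg[t]))
    return mi

# Hypothetical numbers for illustration only: 10% disease prevalence,
# test sensitivity 0.9, specificity 0.8.
# Rows: diseased / healthy; columns: test positive / negative.
joint = [[0.10 * 0.9, 0.10 * 0.1],
         [0.90 * 0.2, 0.90 * 0.8]]
print(f"I(D; T) = {mutual_information(joint):.3f} bits")  # ~0.145 bits
```

A perfectly uninformative test yields I(D; T) = 0; a perfect test yields I(D; T) equal to the entropy of the disease prior, so the measure captures discrimination ability on a single scale.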

Cited by 16 publications (22 citation statements: 0 supporting, 22 mentioning, 0 contrasting)
References 24 publications
“…In the life sciences, Shannon entropy has been used to measure cellular diversity (12,13) and phylogenetic variation (14), and to model molecular interactions (15). This concept has previously been applied to single-analyte laboratory testing by Rudolph (16)(17)(18) and, more recently, by Benish (19)(20)(21)(22)(23) and Vollmer (24). However, their approaches have not been widely adopted or disseminated and, in particular, have not been applied to NGS.…”
mentioning
confidence: 98%
“…Since these possibilities are equal in both tests, the entropy of the disease is the same. When we have a look at [6].…”
Section: Application and Results
mentioning
confidence: 99%
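The snippet above refers to the entropy of the disease, i.e., the Shannon entropy of the prior distribution over disease states, which is the same for both tests because it depends only on the prior. A minimal sketch; the two-state prior is an assumed example, not taken from the cited work:

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p * log2 p), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Assumed prior over disease states (diseased vs. healthy), for illustration.
prior = [0.10, 0.90]
print(f"H(D) = {entropy(prior):.3f} bits")  # identical for both tests
```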
“…In this and subsequent sections, we explore how to quantify the information provided by diagnostic test results [13-16]. This exploration will necessitate a framework for handling uncertainty at the level of disease probability, and at the level of ranges instead of point estimates of disease probability.…”
Section: Information Theory: the Surprisal
mentioning
confidence: 99%
“…The surprisal is defined mathematically as $S(p) = -\log_2 p$, where S is the surprisal and p is the probability of an event occurring. For reasons of convention and practicality [16], the logarithm is taken with base 2, yielding surprisal in units of “bits”. Fig.…”
Section: Information Theory: the Surprisal
mentioning
confidence: 99%
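The definition quoted above translates directly to code; the probability values below are arbitrary examples.

```python
import math

def surprisal(p):
    """Surprisal S(p) = -log2(p), in bits; rarer events carry more surprisal."""
    if not 0 < p <= 1:
        raise ValueError("p must be a probability in (0, 1]")
    return -math.log2(p)

print(surprisal(0.5))   # 1.0 bit: a fair coin flip
print(surprisal(0.01))  # ~6.64 bits: a 1-in-100 event
```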