Relative entropy is a concept within information theory that provides a measure of the distance between two probability distributions. The author proposes that the amount of information gained by performing a diagnostic test can be quantified by calculating the relative entropy between the posttest and pretest probability distributions. This statistic, in essence, quantifies the degree to which the results of a diagnostic test are likely to reduce our surprise upon ultimately learning a patient's diagnosis. A previously proposed measure of diagnostic information that is also based on information theory (pretest entropy minus posttest entropy) has been criticized as failing, in some cases, to agree with our intuitive concept of diagnostic information. The proposed formula passes the tests used to challenge this previous measure.
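The relative entropy described above can be sketched in a few lines. This is a minimal illustration, not code from the paper; the 50/50 pretest and 90/10 posttest distributions are hypothetical numbers chosen to show the calculation:

```python
import math

def relative_entropy(post, pre):
    """Kullback-Leibler divergence D(post || pre), in bits."""
    return sum(q * math.log2(q / p) for q, p in zip(post, pre) if q > 0)

# Hypothetical two-diagnosis example: pretest probabilities are 50/50,
# and a test result shifts the posttest distribution to 90/10.
pre = [0.5, 0.5]
post = [0.9, 0.1]
info = relative_entropy(post, pre)  # information gained by the test, in bits
```

A test result that leaves the probabilities unchanged yields zero relative entropy, matching the intuition that an uninformative result provides no diagnostic information.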
Summary. Objectives: This paper demonstrates that diagnostic test performance can be quantified as the average amount of information the test result (R) provides about the disease state (D). Methods: A fundamental concept of information theory, mutual information, is directly applicable to this problem. This statistic quantifies the amount of information that one random variable contains about another random variable. Prior to performing a diagnostic test, R and D are random variables. Hence, their mutual information, I(D;R), is the amount of information that R provides about D. Results: I(D;R) is a function of both 1) the pretest probabilities of the disease state and 2) the set of conditional probabilities relating each possible test result to each possible disease state. The area under the receiver operating characteristic curve (AUC) is a popular measure of diagnostic test performance which, in contrast to I(D;R), is independent of the pretest probabilities; it is a function of only the set of conditional probabilities. The AUC is not a measure of diagnostic information. Conclusions: Because I(D;R) is dependent upon pretest probabilities, knowledge of the setting in which a diagnostic test is employed is a necessary condition for quantifying the amount of information it provides. Advantages of I(D;R) over the AUC are that it can be calculated without invoking an arbitrary curve-fitting routine, it is applicable to situations in which multiple diagnoses are under consideration, and it quantifies test performance in meaningful units (bits of information).
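The computation of I(D;R) from the two inputs the abstract names, the pretest probabilities and the conditional probabilities P(R | D), can be sketched as follows. The prevalence, sensitivity, and specificity values are hypothetical, chosen only to exercise the formula:

```python
import math

def mutual_information(p_d, p_r_given_d):
    """I(D;R) in bits, from pretest probabilities p_d[i] and
    conditional probabilities p_r_given_d[i][j] = P(R=j | D=i)."""
    n_d, n_r = len(p_d), len(p_r_given_d[0])
    # Joint distribution P(D=i, R=j) and result marginal P(R=j)
    joint = [[p_d[i] * p_r_given_d[i][j] for j in range(n_r)] for i in range(n_d)]
    p_r = [sum(joint[i][j] for i in range(n_d)) for j in range(n_r)]
    return sum(joint[i][j] * math.log2(joint[i][j] / (p_d[i] * p_r[j]))
               for i in range(n_d) for j in range(n_r) if joint[i][j] > 0)

# Hypothetical binary test: prevalence 0.2, sensitivity 0.9, specificity 0.8
p_d = [0.2, 0.8]          # [diseased, healthy]
cond = [[0.9, 0.1],       # P(+ | diseased), P(- | diseased)
        [0.2, 0.8]]       # P(+ | healthy),  P(- | healthy)
i_dr = mutual_information(p_d, cond)  # expected information per test, in bits
```

Because the pretest probabilities p_d enter the joint distribution directly, the same test yields different amounts of information in settings with different disease prevalence, which is the abstract's central contrast with the AUC.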
Mutual information is the best single measure of the ability of a diagnostic test to discriminate among the possible disease states.
Summary. Objectives: The purpose of this communication is to demonstrate the use of “information graphs” as a means of characterizing diagnostic test performance. Methods: Basic concepts in information theory allow us to quantify diagnostic uncertainty and diagnostic information. Given the probabilities of the diagnoses that can explain a patient’s condition, the entropy of that distribution is a measure of our uncertainty about the diagnosis. The relative entropy of the posttest probabilities with respect to the pretest probabilities quantifies the amount of information gained by diagnostic testing. Mutual information is the expected value of relative entropy and, hence, provides a measure of expected diagnostic information. These concepts are used to derive formulas for calculating diagnostic information as a function of pretest probability for a given pair of test operating characteristics. Results: Plots of diagnostic information as a function of pretest probability are constructed to evaluate and compare the performance of three tests commonly used in the diagnosis of coronary artery disease. The graphs illustrate the critical role that the pretest probability plays in determining diagnostic test information. Conclusions: Information graphs summarize diagnostic test performance and offer a way to evaluate and compare diagnostic tests.
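The curve underlying such an information graph, expected diagnostic information as a function of pretest probability for a fixed pair of operating characteristics, can be sketched as below. The sensitivity/specificity pair (0.68, 0.77) is an illustrative assumption, not a value reported in the paper, and the identity I(D;R) = H(D) − H(D|R) is used in place of the paper's own derivation:

```python
import math

def h(p):
    """Binary entropy in bits."""
    return -sum(x * math.log2(x) for x in (p, 1 - p) if x > 0)

def expected_information(p, sens, spec):
    """I(D;R) in bits for a binary test with the given sensitivity and
    specificity, evaluated at pretest probability p."""
    p_pos = p * sens + (1 - p) * (1 - spec)              # P(positive result)
    post_pos = p * sens / p_pos if p_pos > 0 else 0.0    # P(disease | +)
    post_neg = p * (1 - sens) / (1 - p_pos) if p_pos < 1 else 0.0  # P(disease | -)
    # I(D;R) = H(D) - H(D|R): pretest uncertainty minus expected posttest uncertainty
    return h(p) - (p_pos * h(post_pos) + (1 - p_pos) * h(post_neg))

# Points on a hypothetical information graph (assumed sens=0.68, spec=0.77)
curve = [(round(p, 2), expected_information(p, 0.68, 0.77))
         for p in (0.1, 0.3, 0.5, 0.7, 0.9)]
```

Plotting `curve` reproduces the qualitative shape such graphs show: the test provides no information when the diagnosis is already certain (p = 0 or 1) and the most information at intermediate pretest probabilities.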