2019
DOI: 10.3390/e21100975

Information Theoretic Causal Effect Quantification

Abstract: Modelling causal relationships has become popular across various disciplines. The most common frameworks for causality are Pearlian causal directed acyclic graphs (DAGs) and the Neyman-Rubin potential outcome framework. In this paper, we propose an information theoretic framework for causal effect quantification. To this end, we formulate a two-step causal deduction procedure in the Pearl and Rubin frameworks and introduce its equivalent which uses information theoretic terms only. The first step of the procedure…

Cited by 10 publications (9 citation statements) · References 69 publications (127 reference statements)
“…The effect of a cause has been quantified using information theory (Wieczorek and Roth 2019), though without considering learning in a Bayesian setting. Entropic causal inference (Compton et al 2021) specifies circumstances in which the causal direction between categorical variables can be determined from observational data under assumptions of limited entropy.…”
Section: Related Work
confidence: 99%
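To make the information-theoretic quantification concrete, here is a minimal sketch (not the estimator of Wieczorek and Roth, and all parameters are illustrative): it scores the strength of a presumed causal link X → Y by the plug-in mutual information I(X; Y) of discrete samples, assuming the graph X → Y with no confounding so that the observational dependence is causal.

```python
# Minimal sketch: plug-in mutual information as a causal-effect score,
# assuming X -> Y with no confounders (illustrative, not the paper's method).
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of I(X; Y) in bits for discrete samples."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            p_xy = np.mean((x == xv) & (y == yv))   # joint probability
            p_x = np.mean(x == xv)                   # marginals
            p_y = np.mean(y == yv)
            if p_xy > 0:
                mi += p_xy * np.log2(p_xy / (p_x * p_y))
    return mi

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=10_000)                   # binary cause
y = np.where(rng.random(10_000) < 0.9, x, 1 - x)      # noisy effect of x
print(f"I(X;Y) ~ {mutual_information(x, y):.3f} bits")  # ~0.53 for a 10% flip
```

For this binary symmetric channel with flip probability 0.1, the score approaches 1 − H(0.1) ≈ 0.53 bits; it drops to 0 as the link is noised away.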
“…Parents’ heights are potential causes of offspring heights, but not vice versa. This intuitively appealing idea leads to tests for predictive causality, such as classical Granger causality testing in time-series analysis and its nonparametric generalization to transfer entropy [19]. In these tests, X is defined as a predictive cause of Y if future values of Y are not conditionally independent of past and present values of X, given past and present values of Y itself.…”
Section: The Structure of Explanations in Causal Bayesian Networks (BNs)
confidence: 99%
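The transfer-entropy test described above can be written, for the simplest lag-1 discrete case, as TE_{X→Y} = I(Y_t; X_{t−1} | Y_{t−1}). The plug-in estimator below is a hedged sketch under that lag-1, binary assumption, not the general method of [19]; the variable names and the demo series are illustrative.

```python
# Sketch: lag-1 transfer entropy TE_{X->Y} = I(Y_t; X_{t-1} | Y_{t-1}),
# estimated by plug-in counting on binary series (illustrative assumptions).
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Lag-1 transfer entropy from x to y in bits, for discrete series."""
    triples = list(zip(y[1:], x[:-1], y[:-1]))    # (Y_t, X_{t-1}, Y_{t-1})
    n = len(triples)
    joint = Counter(triples)                       # counts of (y_t, x_prev, y_prev)
    m_y0 = Counter(yp for (_, _, yp) in triples)   # counts of y_prev
    m_ty = Counter((yt, yp) for (yt, _, yp) in triples)   # (y_t, y_prev)
    m_xy = Counter((xp, yp) for (_, xp, yp) in triples)   # (x_prev, y_prev)
    te = 0.0
    for (yt, xp, yp), c in joint.items():
        # term: p(joint) * log2[ p(y_t | x_prev, y_prev) / p(y_t | y_prev) ]
        te += (c / n) * np.log2((c * m_y0[yp]) / (m_ty[(yt, yp)] * m_xy[(xp, yp)]))
    return te

rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=5_000)                 # driver: i.i.d. bits
y = np.empty_like(x)
y[0] = 0
y[1:] = np.where(rng.random(4_999) < 0.8, x[:-1], rng.integers(0, 2, 4_999))
print(f"TE X->Y ~ {transfer_entropy(x, y):.3f} bits")  # clearly positive
print(f"TE Y->X ~ {transfer_entropy(y, x):.3f} bits")  # near zero
```

Because y copies the previous x 80% of the time while x ignores y, the estimate is asymmetric: TE_{X→Y} is clearly positive and TE_{Y→X} is near zero, matching the predictive-cause definition quoted above.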
“…Specifically, to predict how changing X (via an exogenous intervention or manipulation) would change the distribution of one of its descendants, Y, it is necessary to identify an adjustment set of other variables to condition upon [30]. Adjustment sets generalize the principle that one must condition on any common parents of X and Y to eliminate confounding biases, but must not condition on any common children to avoid introducing selection biases [19]. Appropriate adjustment sets can be computed from the DAG structure of a causal BN for both direct causal effects and total causal effects [30].…”
Section: The Structure of Explanations in Causal Bayesian Networks (BNs)
confidence: 99%
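A worked toy example of the adjustment principle: in the DAG Z → X, Z → Y, X → Y, the common parent Z forms a valid adjustment set, and the adjustment formula is P(y | do(x)) = Σ_z P(y | x, z) P(z). The simulation below (all probabilities are illustrative choices, not from [30]) contrasts the confounded naive contrast with the adjusted one.

```python
# Sketch: back-door adjustment on the toy DAG Z -> X, Z -> Y, X -> Y.
# The adjustment set {Z} (the common parent) removes confounding bias.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
z = rng.integers(0, 2, size=n)                              # confounder
x = (rng.random(n) < np.where(z == 1, 0.8, 0.2)).astype(int)  # z raises x
y = (rng.random(n) < 0.2 + 0.3 * x + 0.4 * z).astype(int)     # true ATE = 0.3

# Naive observational contrast, biased upward because z raises both x and y:
naive = y[x == 1].mean() - y[x == 0].mean()

def do_effect(x_val):
    """Adjustment formula: P(y=1 | do(X=x)) = sum_z P(y=1 | x, z) P(z)."""
    total = 0.0
    for z_val in (0, 1):
        mask = (x == x_val) & (z == z_val)
        total += y[mask].mean() * np.mean(z == z_val)
    return total

adjusted = do_effect(1) - do_effect(0)
print(f"naive: {naive:.3f}, adjusted: {adjusted:.3f} (true effect: 0.300)")
```

The adjusted estimate recovers the true effect of about 0.3, while the naive contrast is inflated; conditioning on a common child instead would introduce the selection bias the quote warns against.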
“…Our approach combines two ideas. First, we exploit the ability of probability trees to represent context-dependent relationships by representing the different causal hypotheses as sub-trees of a single large probability tree, thereby reducing the problem of causal induction to a simple inference problem in this larger tree, which can be solved using Bayes’ theorem. Second, by combining the causal hypotheses in a single model, we can predict the information gain associated with each intervention in advance and select the intervention with the highest gain, yielding a natural active-learning method for causal induction on probability trees. a) Related work: Information theory has previously been used to quantify the causal effect between variables [17] and to specify circumstances where the causal orientation of categorical variables can be determined from observational data [18]. Information geometry has been used to infer causal orientation from observational data under assumptions on the generative mechanisms [19]; however, both settings differ from the Bayesian learning problem considered here.…”
Section: Introduction
confidence: 99%
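As a hedged sketch of the intervention-selection idea (without the probability-tree machinery of the cited paper, and with made-up predictive probabilities): place a prior over two causal hypotheses, H1: X → Y and H2: Y → X, score each candidate intervention by its expected information gain about the hypothesis, and choose the maximizer.

```python
# Sketch: pick the intervention with the highest expected information gain
# about the causal hypothesis. H1: X -> Y, H2: Y -> X; numbers are toy values.
import numpy as np

prior = np.array([0.5, 0.5])            # P(H1), P(H2)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def predictive(intervention):
    """P(observed effect variable = 1 | H, intervention) for (H1, H2)."""
    if intervention == "do(X=1)":       # we then observe Y
        return np.array([0.9, 0.5])     # H1: Y tracks X; H2: Y stays marginal
    else:                               # "do(Y=1)", we then observe X
        return np.array([0.5, 0.7])     # H1: X stays marginal; H2: X tracks Y

def expected_info_gain(intervention):
    """H(prior) - E_outcome[H(posterior)], in bits."""
    like_one = predictive(intervention)
    gain = 0.0
    for like in (like_one, 1.0 - like_one):   # outcome = 1, outcome = 0
        p_out = float(np.sum(prior * like))   # marginal outcome probability
        post = prior * like / p_out           # Bayes' theorem
        gain += p_out * (entropy(prior) - entropy(post))
    return gain

gains = {a: expected_info_gain(a) for a in ("do(X=1)", "do(Y=1)")}
print(gains)                                  # do(X=1) is more informative here
print("choose:", max(gains, key=gains.get))
```

With these toy numbers, do(X=1) yields about 0.15 bits of expected gain versus 0.03 bits for do(Y=1), so the active learner probes X first; after observing the outcome, the posterior becomes the new prior and the selection repeats.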