2022
DOI: 10.1109/tai.2022.3150264
Evaluation Methods and Measures for Causal Learning Algorithms

Cited by 39 publications (15 citation statements) · References 72 publications
“…Currently, although there are a number of software packages for treatment effect estimation from observational data, e.g., the Python libraries CausalML, EconML, and DoWhy, software for validating the (C)ATEs obtained from an observational dataset by comparison to RCT data is, to the best of our knowledge, non-existent. Earlier work by Wendling et al. (2018), Alaa and Van Der Schaar (2019), Schuler et al. (2017), Powers et al. (2018), Franklin et al. (2014), and Cheng et al. (2022), and existing software packages such as the R package MethodEvaluation (Schuemie et al. 2020), the Python package Causality-Benchmark (Shimoni et al. 2018), and the Python package JustCause (Franz 2020), do approximate a data generation process for a given observational dataset and use simulation methods for treatment effect validation. An overview of existing software for treatment effect estimation and validation is provided in Table 1.…”
Section: Related Work (mentioning)
confidence: 99%
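The simulation-based validation idea described in this citation statement can be made concrete with a minimal sketch (not taken from any of the cited packages): generate data from a known data-generating process so the true ATE is available by construction, then check an estimator against it, here using DoWhy's standard CausalModel API. The variable names, coefficients, and the choice of propensity-score matching are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel

# Illustrative synthetic DGP with a known average treatment effect of 2.0
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                        # confounder
t = rng.binomial(1, 1 / (1 + np.exp(-x)))     # treatment assignment depends on x
y = 2.0 * t + 1.5 * x + rng.normal(size=n)    # outcome; true ATE = 2.0
df = pd.DataFrame({"x": x, "t": t, "y": y})

# Estimate the effect and compare against the known ground truth
model = CausalModel(data=df, treatment="t", outcome="y", common_causes=["x"])
estimand = model.identify_effect(proceed_when_unidentifiable=True)
estimate = model.estimate_effect(
    estimand, method_name="backdoor.propensity_score_matching"
)
print("estimated ATE:", estimate.value)       # should be close to 2.0
```

Because the true effect is known only in the simulated setting, this kind of check evaluates the estimator, not the real-world validity of any particular observational study.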
“…One of the biggest open problems in CausalML is the lack of public benchmark resources to train and evaluate causal models. Cheng et al. [348] found that the reason for this lack of benchmarks is the difficulty of observing interventions in the real world, because the necessary experimental conditions in the form of randomized controlled trials (RCTs) are often expensive, unethical, or time-consuming. In other words, collecting interventional data involves actively interacting with an environment (i.e.…”
Section: Lack of Benchmarks (mentioning)
confidence: 99%
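A short, self-contained sketch of the usual workaround for the benchmark problem noted above: when both potential outcomes are simulated, the ground-truth effects needed for evaluation exist by construction, and estimators can be scored with measures such as PEHE and ATE error without running an RCT. The data-generating process and the stand-in predictions below are purely illustrative.

```python
import numpy as np

# Illustrative semi-synthetic benchmark: both potential outcomes are simulated,
# so true individual treatment effects are known without any real intervention.
rng = np.random.default_rng(42)
n = 2000
x = rng.normal(size=(n, 5))                    # covariates
tau = 1.0 + 0.5 * x[:, 0]                      # true individual treatment effects
y0 = x @ rng.normal(size=5)                    # potential outcome under control
y1 = y0 + tau                                  # potential outcome under treatment

# Any CATE estimator's predictions can be scored against the known effects;
# here a noisy copy of tau stands in for a model's output.
tau_hat = tau + rng.normal(scale=0.3, size=n)
pehe = np.sqrt(np.mean((tau_hat - tau) ** 2))  # precision in estimating heterogeneous effects
ate_error = abs(tau_hat.mean() - tau.mean())
print(f"PEHE={pehe:.3f}  ATE error={ate_error:.3f}")
```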
“…Causal inference plays a significant role in improving the correctness and interpretability of deep learning systems [46, 47, 48]. Researchers in the reinforcement learning community have used causal inference to mitigate estimation errors of off-policy evaluation in partially observable environments [49, 50, 51].…”
Section: Related Work: Smart Grids Communications in Cooperative Agen... (mentioning)
confidence: 99%