2021
DOI: 10.48550/arxiv.2107.10483
Preprint
Efficient Neural Causal Discovery without Acyclicity Constraints

Abstract: Learning the structure of a causal graphical model using both observational and interventional data is a fundamental problem in many scientific fields. A promising direction is continuous optimization for score-based methods, which efficiently learn the causal graph in a data-driven manner. However, to date, those methods require constrained optimization to enforce acyclicity or lack convergence guarantees. In this paper, we present ENCO, an efficient structure learning method for directed, acyclic causal graph…

Cited by 3 publications (7 citation statements)
References 21 publications
“…We also compared to an all-absent model corresponding to a zero adjacency matrix, which acts as a sanity-check baseline. We also considered other methods (Chickering, 2002; Hauser & Bühlmann, 2012; Zhang et al., 2012; Gamella & Heinze-Deml, 2020), but only presented a comparison with non-linear ICP and DAG-GNN, as these have been shown to be strong-performing models in other works (Ke et al., 2020a; Lippe et al., 2021; Scherrer et al., 2021). For Section 6.3, we also compared to additional baselines from Chickering (2002); Hauser & Bühlmann (2012); Zheng et al. (2018); Gamella & Heinze-Deml (2020).…”
Section: Methods
confidence: 99%
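The all-absent baseline quoted above has a convenient property: its structural Hamming distance (SHD) to the ground-truth graph is exactly the number of true edges, so it gives a floor that any learned model should beat. A minimal sketch of that check (the `shd` helper and the random graph are illustrative, not code from the cited papers):

```python
import numpy as np

def shd(pred, true):
    """Structural Hamming distance: number of edge insertions,
    deletions, or reversals needed to turn `pred` into `true`."""
    diff = np.abs(pred - true)
    # A reversed edge (i->j predicted, j->i true) counts once, not twice.
    flips = ((pred == 1) & (true.T == 1) & (true == 0)).sum()
    return int(diff.sum() - flips)

rng = np.random.default_rng(0)
n = 5
# Random ground-truth DAG: strictly upper-triangular adjacency matrix.
true_graph = np.triu(rng.integers(0, 2, size=(n, n)), k=1)

# All-absent baseline: predict no edges at all.
all_absent = np.zeros((n, n), dtype=int)

# Its SHD equals the number of true edges.
assert shd(all_absent, true_graph) == true_graph.sum()
```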
“…Although this could in principle yield cycles in the graph, in practice we observed strong performance regardless. Nevertheless, one could likely improve the results using post-processing (Lippe et al., 2021) or by extending the method with an accept-reject algorithm (Castelletti & Mascaro, 2022; Li et al., 2022).…”
Section: Decoding the Adjacency Matrix
confidence: 99%
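The post-processing referenced in this snippet can be sketched as a greedy cycle-removal pass: threshold the learned edge probabilities, then, while a directed cycle remains, delete the weakest edge on that cycle. This is an illustrative decoding scheme under assumed names (`decode_dag`, `find_cycle`), not the exact procedure of any cited paper:

```python
import numpy as np

def find_cycle(adj):
    """Return a list of edges forming a directed cycle, or None."""
    n = len(adj)
    color = [0] * n  # 0 = unvisited, 1 = on DFS stack, 2 = done
    stack = []

    def dfs(u):
        color[u] = 1
        stack.append(u)
        for v in range(n):
            if adj[u][v]:
                if color[v] == 1:  # back edge closes a cycle
                    i = stack.index(v)
                    cyc = stack[i:] + [v]
                    return list(zip(cyc[:-1], cyc[1:]))
                if color[v] == 0:
                    res = dfs(v)
                    if res:
                        return res
        stack.pop()
        color[u] = 2
        return None

    for s in range(n):
        if color[s] == 0:
            res = dfs(s)
            if res:
                return res
    return None

def decode_dag(edge_probs, threshold=0.5):
    """Threshold soft edge probabilities, then repeatedly delete the
    weakest edge of any remaining cycle until the graph is acyclic."""
    adj = (edge_probs > threshold).astype(int)
    while (cycle := find_cycle(adj)) is not None:
        i, j = min(cycle, key=lambda e: edge_probs[e[0], e[1]])
        adj[i, j] = 0
    return adj

# Example: a 3-cycle where the weakest edge (2 -> 0, prob 0.6) is cut.
probs = np.array([[0.0, 0.9, 0.0],
                  [0.0, 0.0, 0.8],
                  [0.6, 0.0, 0.0]])
dag = decode_dag(probs)
```

Greedily cutting the weakest edge is a heuristic: it keeps the edges the model is most confident about, but it is not guaranteed to remove the minimum-probability-mass set of edges overall.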
“…Due to the limitations of purely observational data, several works extend the continuous optimization framework to make use of interventional data. In concurrent work with ours, Lippe et al. (2021) scale this approach to higher dimensions by splitting structural edge parameters into separate orientation and likelihood parameters and leveraging them in an adapted gradient formulation with lower variance. In contrast to our work, they require interventional data on every variable.…”
Section: Related Work
confidence: 99%
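The parameter split described in this snippet can be illustrated with a toy computation: one set of parameters scores whether an edge exists at all, while a second, antisymmetric set scores its direction, and their product gives the edge probability. The names (`gamma`, `theta`) and random values below are placeholders following common convention, not the cited method's exact parameterisation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n = 4
rng = np.random.default_rng(1)

# Existence parameters: one per ordered pair (hypothetical values).
gamma = rng.normal(size=(n, n))

# Orientation parameters: antisymmetric, so sigmoid(theta[i, j]) and
# sigmoid(theta[j, i]) always sum to 1 -- the two directions compete.
raw = rng.normal(size=(n, n))
theta = np.triu(raw, k=1) - np.triu(raw, k=1).T

# Edge probability factorises into "does an edge exist?" times
# "which way does it point?".
edge_prob = sigmoid(gamma) * sigmoid(theta)
np.fill_diagonal(edge_prob, 0.0)
```

Separating the two decisions is what enables the lower-variance gradient estimates the snippet mentions: the orientation parameters can be updated from interventional data alone, independently of how confident the model is that the edge exists.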
“…As a result, there has been a surge of interest in differentiable structure learning and the combination of deep learning and causal inference. Such methods define a structural causal model with smoothly differentiable parameters that are adjusted to fit observational data (Zheng et al., 2018; Yu et al., 2019; Zheng et al., 2020; Bengio et al., 2019; Lorch et al., 2021; Annadani et al., 2021), although some methods can accept interventional data, thereby significantly improving the identification of the underlying data-generating process (Lippe et al., 2021). However, the improvement critically depends on the experiments and interventions available.…”
Section: Introduction
confidence: 99%