2021
DOI: 10.1109/jproc.2021.3058954
Toward Causal Representation Learning

Abstract: The two fields of machine learning and graphical causality arose and have developed separately. However, there is now cross-pollination and increasing interest in both fields to benefit from the advances of the other. In this article, we review fundamental concepts of causal inference and relate them to crucial open problems of machine learning, including transfer and generalization, thereby assaying how causality can contribute to modern machine learning research. This also applies in the opposite direction:…


Cited by 614 publications (430 citation statements)
References 103 publications (136 reference statements)
“…[69][70][71][72][73] However, understanding and disentangling their latent spaces remains challenging. [74,75] Among the tested DNNs and across the multiple tests, the viAE provided the best face-shape representations to predict human behavior. With the notable exception of the generalization testing, the simple nonlinear pixelPCA model came close to this performance.…”
Section: Hypothesis-driven Research Using Generative Models
confidence: 96%
“…However, this teaching has been criticised as being detrimental to the potential understanding, which can be gained from techniques such as counterfactual explanations, a specific class of explanation that provides a link between what could have happened had input to a model been changed in a particular way [129]. Causal representation learning is a by-product of this research activity, and its applications have reached explainable CD [130, 131].…”
Section: Challenges, Comparisons, and Future Directions for Change Representation Techniques
confidence: 99%
“…This is the essential problem for artificial systems in emulating cognition in animals. However, there is recent work that employs artificial models of transfer learning [21,22].…”
Section: Abstract Encoding of Sensory Input
confidence: 99%