2021 IEEE International Conference on Multimedia and Expo (ICME)
DOI: 10.1109/icme51207.2021.9428205

Meta-Learning Causal Feature Selection for Stable Prediction

Abstract: Conventional predictive models in machine learning rely on the I.I.D. hypothesis, i.e., that training and testing data are drawn from the same distribution. However, this hypothesis is fragile in the real world, and a model that minimizes empirical error on training data may not perform well on testing data, making its predictions unstable. Such instability arises widely in domain generalization, active learning, transfer learning, and related settings. In this paper, we propose a novel Meta-learning Causal Feature Selection (MCFS) model for general …
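The instability the abstract describes can be made concrete with a toy experiment. The sketch below is not taken from the paper: it simply constructs a spurious feature that is highly predictive in the training environment but anti-correlated with the label at test time, so a classifier fit by ordinary empirical risk minimization degrades under the shift. The helper make_env and the spurious_corr parameter are illustrative names introduced here, not part of MCFS.

```python
# Minimal sketch (not the paper's MCFS method): a spurious feature helps in
# training, but its correlation with the label flips at test time, so a model
# that only minimizes empirical training error becomes unstable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_env(n, spurious_corr):
    """Toy data: y depends causally on x_causal; x_spurious tracks y with
    probability `spurious_corr`, which differs between environments."""
    x_causal = rng.normal(size=n)
    y = (x_causal + 0.5 * rng.normal(size=n) > 0).astype(int)
    agree = rng.random(n) < spurious_corr
    x_spurious = np.where(agree, y, 1 - y) + 0.1 * rng.normal(size=n)
    return np.column_stack([x_causal, x_spurious]), y

X_train, y_train = make_env(5000, spurious_corr=0.95)  # spurious feature "works" here
X_test, y_test = make_env(5000, spurious_corr=0.05)    # correlation reversed at test time

clf = LogisticRegression().fit(X_train, y_train)
print("train acc:", clf.score(X_train, y_train))  # high
print("test acc :", clf.score(X_test, y_test))    # degrades under the shift
```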

Cited by 4 publications (2 citation statements). References: 18 publications.
“…[4], [12], [13], [118]–[124] (Section 4.3). Optimization Methods (Section 5): Distributionally Robust Optimization (Section 5.1) [1], [125]–[134] (Sections 5.1.1–5.1.3), [135]–[138] (Section 5.1.4); Invariance-Based Optimization (Section 5.2) [5], [139], [140]. In real scenarios where observations are made in the form of images or sentences instead of structured data, high-level abstract information needs to be extracted from low-level data [27], and a few existing works [34], [35], [36] propose to recover causal factorization through disentanglement.…”
Section: Causal Representation Learning (mentioning)
Confidence: 99%
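For context on the "Invariance-Based Optimization" entry in the quoted taxonomy, the following is a minimal, generic sketch of an IRMv1-style objective: an ERM term per environment plus a gradient penalty on a dummy classifier scale that is zero when the same classifier is simultaneously optimal in every environment. It is an illustration of that family of methods only; it is not code from the cited survey or from the MCFS paper, and irm_penalty / irm_objective are names introduced here.

```python
# Sketch of an invariance-based objective in the style of IRMv1 (illustrative only).
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    """Squared gradient of the per-environment risk w.r.t. a dummy scale = 1.0.
    y: float labels in {0., 1.} with the same shape as logits."""
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, y)
    grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
    return (grad ** 2).sum()

def irm_objective(model, envs, lam=10.0):
    """Average ERM risk over environments plus the weighted invariance penalty.
    envs: list of (features, labels) tensors, one pair per environment."""
    risk, penalty = 0.0, 0.0
    for x, y in envs:
        logits = model(x).squeeze(-1)
        risk = risk + F.binary_cross_entropy_with_logits(logits, y)
        penalty = penalty + irm_penalty(logits, y)
    n = len(envs)
    return risk / n + lam * penalty / n
```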
“…Zhang et al. [122] propose a Deconfounded Visio-Linguistic BERT framework to mitigate potential data biases, and Yuan et al. [123] propose to identify causal features with a meta-learning mechanism for OOD generalization.…”
Section: Stable Learning (mentioning)
Confidence: 99%
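To make the notion of "identifying causal features for OOD generalization" concrete, here is a deliberately simple sketch of stability-based feature screening across environments: keep only features whose fitted relationship to the label stays consistent from one environment to the next. This is emphatically not the MCFS meta-learning procedure, which the citation snippet does not describe in enough detail to reproduce; stable_feature_mask, envs, and tol are hypothetical names used only for illustration.

```python
# Crude stability-based feature screening across environments (illustrative only,
# not the MCFS algorithm): a feature is kept if its fitted coefficient keeps the
# same sign in every environment and varies little across environments.
import numpy as np
from sklearn.linear_model import LogisticRegression

def stable_feature_mask(envs, tol=0.5):
    """envs: list of (X, y) pairs, one per environment. Returns a boolean mask
    over features marking those whose effect looks stable across environments."""
    coefs = np.stack([LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
                      for X, y in envs])
    same_sign = np.all(np.sign(coefs) == np.sign(coefs[0]), axis=0)
    rel_spread = coefs.std(axis=0) / (np.abs(coefs).mean(axis=0) + 1e-8)
    return same_sign & (rel_spread < tol)
```

Features flagged True could then be used to train the final predictor on the pooled data; MCFS instead learns the selection with a meta-learning objective, which this sketch does not attempt to reproduce.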