2021
DOI: 10.1002/cpe.6347

Causality‐based online streaming feature selection

Abstract: Online streaming feature selection, a well-known and effective preprocessing approach in machine learning, is a perennial topic. Many online streaming feature selection algorithms have achieved a great deal of success in classification and prediction tasks. However, most existing algorithms concentrate only on the relevance between features and labels, and neglect the causal relationships between them. Discovering the potential causal relationships between features and labels, that is, the Mark…
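The abstract describes the online streaming setting: features arrive one at a time, and each must be accepted or discarded on the fly. A minimal sketch of that generic loop, using empirical mutual information as the relevance test (the function names and threshold are illustrative; the paper's actual algorithm additionally exploits causal structure rather than plain relevance):

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Empirical mutual information (nats) between two discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    mi = 0.0
    for (a, b), c in pxy.items():
        pab = c / n
        mi += pab * math.log(pab / ((px[a] / n) * (py[b] / n)))
    return mi

def streaming_select(feature_stream, labels, threshold=0.05):
    """Generic online streaming feature selection: accept each arriving
    feature whose relevance to the label exceeds a threshold.
    (Relevance analysis only; causality-based algorithms add a
    redundancy/causal-structure check on the selected set.)"""
    selected = {}
    for name, values in feature_stream:
        if mutual_information(values, labels) > threshold:
            selected[name] = values
    return selected
```

For example, a feature identical to the labels is kept while a constant feature (zero mutual information) is discarded.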

Cited by 8 publications (5 citation statements)
References 42 publications
“…Yu et al [58] formulated the causal feature selection problem with multiple datasets as a search problem and gave the upper and lower bounds of the invariant set, then proposed a multisource feature selection algorithm. Yang et al [55] proposed the concept of N-structures and then designed an MB discovering subroutine to integrate MB learning with N-structures to discover MB while distinguishing direct causes from direct effects. Yu et al [53] proposed a multilabel feature selection algorithm, multi-label feature selection to causal structure learning (M2LC), which learns the causal mechanism behind the data and is able to select causally informative features and visualize common features.…”
Section: Causality-aware Feature Learning
confidence: 99%
“…At present, many scholars have conducted in-depth research on streaming feature selection [25–29]. Zhou et al. [25] defined an adaptive density neighborhood relation requiring no prior information, based on neighborhood rough sets, and proposed a new online streaming feature selection algorithm. Li et al. [26] proposed a causality-based online streaming feature selection algorithm with neighborhood conditional mutual information.…”
Section: Introduction
confidence: 99%
“…[25–29] Zhou et al. [25] defined an adaptive density neighborhood relation without prior information, based on neighborhood rough sets, and proposed a new online streaming feature selection algorithm. Li et al. [26] proposed a causality-based online streaming feature selection algorithm with neighborhood conditional mutual information. However, these existing online streaming feature selection algorithms ignore the hierarchical relationship between classes and cannot solve the problem of hierarchical classification.…”
Section: Introduction
confidence: 99%
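The statement above mentions selection driven by (neighborhood) conditional mutual information. As a reference point, a plain discrete conditional mutual information estimator, not the neighborhood variant from the cited work, can be sketched as:

```python
import math
from collections import Counter

def conditional_mi(x, y, z):
    """Empirical conditional mutual information I(X; Y | Z) in nats,
    for discrete sequences of equal length. This is the plain discrete
    estimator, not the neighborhood variant from the cited algorithm."""
    n = len(x)
    pz = Counter(z)
    pxz = Counter(zip(x, z))
    pyz = Counter(zip(y, z))
    pxyz = Counter(zip(x, y, z))
    cmi = 0.0
    for (a, b, c), cnt in pxyz.items():
        # p(x,y,z) * log[ p(z) p(x,y,z) / (p(x,z) p(y,z)) ]
        cmi += (cnt / n) * math.log(
            (pz[c] * cnt) / (pxz[(a, c)] * pyz[(b, c)])
        )
    return cmi
```

When Y duplicates X, I(X; Y | Z) equals the within-stratum entropy of X; when Y is fully determined by Z, it is zero.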
“…Because it can uncover the underlying structural knowledge about data-generating processes, allowing interventions and generalizing well across different tasks and environments, causal reasoning [36–38] offers a promising alternative to correlation learning. Recently, causal reasoning has attracted increasing attention in a myriad of high-impact domains of computer vision and machine learning, such as interpretable deep learning [39–44], causal feature selection [45–57], visual comprehension [58–67], visual robustness [68–75], visual question answering [76–81], and video understanding…”
Section: Introduction
confidence: 99%