2020
DOI: 10.1016/j.knosys.2020.106202

Supervised feature selection through Deep Neural Networks with pairwise connected structure

Cited by 36 publications (23 citation statements) | References 25 publications
“…This also constrains the range of possible 3D coordinates for these keypoints to between -1 and 1, allowing the final activation function of our generators to be bounded, which helps improve adversarial learning [34]. Though feature selection [13,46] can be used to find an optimal number of spatially independent segments into which to split a 2D pose, for simplicity we split our pose into two segments during training: the torso and the legs. Therefore, two generators will be trained using our adversarial approach, and one final generator will be trained using knowledge distillation.…”
Section: Methods
confidence: 99%
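
The bounded-output design described in this snippet is straightforward to sketch. Below is a minimal, hypothetical PyTorch example of a per-segment generator whose final tanh activation keeps predicted 3D coordinates in [-1, 1]; the class name, layer sizes, and overall architecture are illustrative assumptions, not the cited paper's implementation.

```python
import torch
import torch.nn as nn

class SegmentGenerator(nn.Module):
    """Hypothetical generator for one pose segment (e.g. torso or legs):
    maps flattened 2D keypoints to 3D coordinates bounded in [-1, 1]."""

    def __init__(self, num_joints: int, hidden: int = 256):
        super().__init__()
        self.num_joints = num_joints
        self.net = nn.Sequential(
            nn.Linear(num_joints * 2, hidden),  # (x, y) per joint, flattened
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_joints * 3),  # (x, y, z) per joint
            nn.Tanh(),                          # bounded activation -> [-1, 1]
        )

    def forward(self, pose_2d: torch.Tensor) -> torch.Tensor:
        # pose_2d: (batch, num_joints * 2) -> (batch, num_joints, 3)
        return self.net(pose_2d).view(-1, self.num_joints, 3)
```

A bounded final activation such as tanh matches the normalized coordinate range, which is the property the snippet credits with improving adversarial training.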
“…This research presented different feature selection approaches based on RNN architectures: long short-term memory (LSTM), bidirectional LSTM, and the gated recurrent unit (GRU). A deep neural network-based feature selection method (NeuralFS) was presented in [20]. Another supervised feature selection approach, based on modifying the first layer of a DNN, was presented in [21] and [6].…”
Section: Literature Review
confidence: 99%
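
The "first layer" idea referenced for [21], [6], and NeuralFS-style methods can be illustrated with a small sketch: a network whose first layer connects each input feature to a single trainable weight (a one-to-one, pairwise connection), so that the learned weight magnitudes rank the features after training. The class name, layer sizes, and the L1 penalty below are assumptions for illustration, not the reviewed papers' exact formulation.

```python
import torch
import torch.nn as nn

class FeatureSelectionNet(nn.Module):
    """Hypothetical DNN whose first layer weights each input feature
    individually; weight magnitudes are used to rank features."""

    def __init__(self, num_features: int, num_classes: int, hidden: int = 64):
        super().__init__()
        # One trainable weight per feature (pairwise connected first layer).
        self.feature_weights = nn.Parameter(torch.ones(num_features))
        self.classifier = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(x * self.feature_weights)

    def sparsity_penalty(self) -> torch.Tensor:
        # Added to the training loss so unimportant weights shrink to zero.
        return self.feature_weights.abs().sum()

    def top_features(self, k: int) -> torch.Tensor:
        # Indices of the k features with the largest learned weights.
        return torch.topk(self.feature_weights.abs(), k).indices
```

During training one would minimize the task loss plus a small multiple of sparsity_penalty(); after training, top_features(k) yields the selected subset.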
“…[28] Random forest
[8] Adaptive group lasso
[15] Pairwise constraints-based method for feature selection
[16] DNN and multilayer bi-directional long short-term memory
[18] Deep CNN based on Pearson correlation coefficient
[19] Deep learning method for feature selection based on the automatic modulation classification (AMC) scheme
[20] Deep neural network-based feature selection
[22] Innovative dual-network architecture
[30] Term frequency/inverse document frequency, report length, and a bag of words as feature engineering techniques, with LR and multinomial Naïve Bayes as classifiers
[31] Naïve Bayes machine learning model
[32] MLP, XGBoost, and LR
[1] Filter selection based on GA
[33] MLP and LR (Continued)…”
Section: Literature Review
confidence: 99%
“…In this phase, the initial feature set is retained, and a new feature set, the main feature set, is generated using a wrapper-based feature selection strategy that calculates the relative importance of the features in the model and filters out those that are redundant or irrelevant. Because linear models cannot capture the nonlinear relationships between the features and the targets [33], both a linear model (LR) and nonlinear models (RF and XGBoost) are used to generate the new feature set. First, a backward feature elimination process based on LR and a t-test is applied to select the linear features that have significant linear relationships with the prediction target, TP.…”
Section: Phase
confidence: 99%
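
The quoted two-pass wrapper strategy can be approximated with scikit-learn. The sketch below is a stand-in under stated assumptions: RFE with a linear model replaces the cited backward elimination with t-tests, a random forest stands in for the RF/XGBoost importance pass, and the synthetic data, feature counts, and union rule for the main feature set are all hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

# Hypothetical data standing in for the paper's features and target TP.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 15))
y = 2.0 * X[:, 0] - X[:, 3] + np.sin(X[:, 7]) + rng.normal(scale=0.1, size=200)

# Linear pass: backward (recursive) elimination with a linear model,
# approximating the LR + t-test elimination described in the snippet.
linear_selector = RFE(LinearRegression(), n_features_to_select=5).fit(X, y)
linear_features = np.flatnonzero(linear_selector.support_)

# Nonlinear pass: rank features by random-forest importance.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
forest_features = np.argsort(forest.feature_importances_)[::-1][:5]

# Main feature set: union of the two passes (the combination rule is assumed).
main_features = sorted(set(linear_features) | set(forest_features))
print(main_features)
```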