2015 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2015.7280626
Feature selection using Deep Neural Networks

Abstract: Feature descriptors used in video processing are generally high dimensional. Even though the extracted features are high dimensional, the task at hand often depends on only a small subset of them. For example, if two actions such as running and walking have to be distinguished, extracting features related to the person's leg movement is enough. Since this subset is not known a priori, we tend to use all the features, irrespective of the complexity of the task at hand. Selecting ta…

Cited by 56 publications (48 citation statements) | References 16 publications
“…Results are promising, both in terms of learning performance and biomarker detection. Extensions of the Ph-CNN architecture are addressing the testing of different tree distances, the optimization of neighbour detection, and the number of Phylo-Conv layers. Further, different feature selection algorithms, either generic or DL-specific, can be adopted [53,54,55]. Improvements are expected in the transfer learning and domain adaptation procedures, such as learning on synthetic data and testing on metagenomics data, and in applying the method to larger datasets.…”
Section: Discussion
confidence: 99%
“…As the network weights are directly used as the feature weights, it cannot handle situations where the inputs contain outliers or noise. Towards this end, Roy et al. (2015) use the activation potentials contributed by each individual input dimension as the metric for feature selection. However, this work relies on a specific DNN structure and the ReLU activation function, which may not be suitable for many learning tasks.…”
Section: Feature Selection Methods
confidence: 99%
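The contrast this passage draws is easy to demonstrate. Below is a minimal NumPy sketch (the toy weights, the toy data, and the mean-absolute-contribution aggregation are illustrative assumptions, not the exact formulation of either cited work): a feature with large first-layer weights but a noisy, nearly inactive input is ranked first by the weight-only criterion, while the data-dependent activation-potential criterion ranks the informative feature first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy first-layer weights: feature 0 has large weights, feature 1 small ones.
W1 = np.array([[2.0, -2.0],
               [0.5,  0.5]])  # shape (n_features=2, n_hidden=2)

# Feature 0 is a noisy, nearly inactive input; feature 1 varies meaningfully.
X = np.column_stack([rng.normal(0.0, 0.01, 1000),
                     rng.normal(0.0, 1.0, 1000)])

# Weight-only criterion: looks at the learned weights alone.
weight_score = np.abs(W1).sum(axis=1)          # -> ranks feature 0 first

# Activation-potential criterion: weighs each feature's actual
# contribution x_i * w_ij to the hidden nodes' pre-activations.
contrib = X[:, :, None] * W1[None, :, :]       # (samples, features, hidden)
ap_score = np.abs(contrib).mean(axis=(0, 2))   # -> ranks feature 1 first

print(weight_score, ap_score)
```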
“…
• RF (Random Forest): a tree-based feature selection method provided by the scikit-learn package.
• Roy et al. (2015): a DNN-based feature selection method, reproduced according to the paper.
Parameter settings: model parameters are initialized from a truncated normal distribution with a mean of 0 and a standard deviation of 0.1.…”
Section: Embedded Methods
confidence: 99%
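For the quoted initialization setting, a minimal PyTorch sketch is shown below; the function name, the layer sizes, and the choice of PyTorch itself (the reproduction's framework is not stated) are assumptions for illustration.

```python
import torch.nn as nn

def init_trunc_normal(module, mean=0.0, std=0.1):
    # Truncated normal with mean 0 and std 0.1, as quoted above. Note that
    # trunc_normal_ takes absolute cutoffs a/b, so mean +/- 2*std reproduces
    # the usual "truncate at two standard deviations" convention.
    if isinstance(module, nn.Linear):
        nn.init.trunc_normal_(module.weight, mean=mean, std=std,
                              a=mean - 2 * std, b=mean + 2 * std)
        nn.init.zeros_(module.bias)

# Placeholder architecture: the reproduction's exact layer sizes are not quoted.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
model.apply(init_trunc_normal)
```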
“…Several methods have been proposed for feature selection using DNNs, with a focus on reducing input dimensionality, such as sparse one-to-one, dropout feature ranking, and activation-potential-based methods (15-17). We used the activation-potential-based method because of its proven performance in reducing the number of features, because it obviates coupling the DNN with another filtering method, and because of its simplicity and intuitiveness in selecting the number of important variables.…”
Section: Deep Neural Network (DNN)
confidence: 99%
“…We used the activation-potential-based method because of its proven performance in reducing the number of features, because it obviates coupling the DNN with another filtering method, and because of its simplicity and intuitiveness in selecting the number of important variables. Feature selection was performed according to the method proposed by Roy et al. (17). Briefly, we computed the activation potential of each input feature connected to each of the hidden nodes in the first layer, before applying ReLU.…”
Section: Deep Neural Network (DNN)
confidence: 99%
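As a reading aid, here is a small NumPy sketch of the computation described above: each input feature's contribution to the first-layer pre-ReLU activations, averaged over the training set and collapsed to one score per feature. The function names and the mean-absolute aggregation are assumptions for illustration; the exact ranking rule is the one defined in Roy et al. (17).

```python
import numpy as np

def activation_potential_scores(W1, X):
    """Score each input feature by the pre-ReLU activation potential it
    contributes to the first hidden layer.

    W1 : (n_features, n_hidden) first-layer weight matrix
    X  : (n_samples, n_features) training inputs

    Feature i contributes x_i * W1[i, j] to node j's pre-activation; the
    magnitude of that contribution is averaged over samples and nodes.
    """
    contrib = X[:, :, None] * W1[None, :, :]   # (samples, features, hidden)
    return np.abs(contrib).mean(axis=(0, 2))   # one score per input feature

def select_top_k(W1, X, k):
    # Indices of the k features with the largest activation potential.
    return np.argsort(-activation_potential_scores(W1, X))[:k]
```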