2016
DOI: 10.1109/tnnls.2015.2424721

Effective Discriminative Feature Selection With Nontrivial Solution

Abstract: Feature selection and feature transformation, the two main ways to reduce dimensionality, are often presented separately. In this paper, a feature selection method is proposed by combining the popular transformation-based dimensionality reduction method linear discriminant analysis (LDA) and sparsity regularization. We impose row sparsity on the transformation matrix of LDA through l2,1-norm regularization to achieve feature selection, and the resultant formulation optimizes for selecting the most discriminati…
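The abstract describes imposing an l2,1-norm penalty on the rows of the LDA transformation matrix so that entire rows shrink toward zero and the corresponding features can be dropped. Below is a minimal Python sketch of that idea, assuming a generic iterative-reweighting scheme with a regularized eigenproblem update; the function names, the update rule, and all parameter choices are illustrative assumptions, not the paper's actual optimization.

import numpy as np

def l21_row_norms(W):
    # Row-wise Euclidean norms; their sum is the l2,1-norm of W.
    return np.sqrt((W ** 2).sum(axis=1))

def lda_l21_select(X, y, n_select, reg=1.0, n_iter=30, eps=1e-8):
    # Illustrative sketch only (not the paper's algorithm).
    # X: (n_samples, d) data matrix, y: (n_samples,) class labels.
    # Returns the indices of the n_select features with the largest row norms of W.
    n, d = X.shape
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    Sb = np.zeros((d, d))  # between-class scatter
    Sw = np.zeros((d, d))  # within-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)
    k = max(len(classes) - 1, 1)  # LDA uses at most (number of classes - 1) directions
    rng = np.random.default_rng(0)
    W = rng.normal(size=(d, k))
    for _ in range(n_iter):
        # Reweighting diagonal for the l2,1 penalty: D_ii = 1 / (2 * ||w_i||_2).
        D = np.diag(1.0 / (2.0 * l21_row_norms(W) + eps))
        # Top-k directions of the penalized problem (Sw + reg * D)^{-1} Sb.
        M = np.linalg.solve(Sw + reg * D, Sb)
        vals, vecs = np.linalg.eig(M)
        order = np.argsort(-vals.real)[:k]
        W = vecs[:, order].real
    scores = l21_row_norms(W)  # a row norm near zero means the feature is not selected
    return np.argsort(-scores)[:n_select]

Because the penalty acts on whole rows of W, each original feature is kept or discarded as a unit, which is the row-sparsity property the abstract refers to.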

Cited by 154 publications (52 citation statements). References 23 publications. Citing publications span 2018 to 2024.
“…For a given F-measure r, the corresponding cost matrix C is fixed and thus W is the only variable in Eq. (14). Thus, we develop an iterative algorithm to solve this problem.…”
Section: B. Optimization Methods
confidence: 99%
“…Algorithm 1: An iterative algorithm to solve the optimization problem in Eq. (14). Input: feature matrix X ∈ R^{d×n}, label matrix Y ∈ R^{n×m}, and discretized F-measure value r. Output: projection matrix W ∈ R^{d×m}.…”
Section: Convergence Analysis
confidence: 99%
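The quoted statements describe an alternating strategy: each discretized F-measure value r fixes the cost matrix C, and an inner loop then updates only the projection matrix W until the objective of Eq. (14) stops decreasing. The skeleton below only mirrors that loop structure with the shapes given in the quote (X is d×n, Y is n×m, W is d×m); the cost matrix construction and the cost-weighted objective are placeholders, since Eq. (14) of the citing paper is not reproduced here.

import numpy as np

def algorithm1_sketch(X, Y, r, lam=1e-2, lr=1e-3, n_iter=200, tol=1e-8):
    # Hypothetical skeleton only. X: d x n feature matrix, Y: n x m label matrix,
    # r: discretized F-measure value. Returns a d x m projection matrix W.
    d, n = X.shape
    m = Y.shape[1]
    # Placeholder cost matrix: fixed once r is given (the real construction differs).
    C = np.where(Y > 0, 1.0 - r, 1.0)
    W = np.zeros((d, m))
    prev = np.inf
    for _ in range(n_iter):
        R = X.T @ W - Y                                   # n x m residuals
        obj = np.sum(C * R ** 2) + lam * np.sum(W ** 2)   # placeholder for Eq. (14)
        grad = 2.0 * X @ (C * R) + 2.0 * lam * W
        W -= lr * grad                                    # W is the only variable updated
        if abs(prev - obj) < tol:                         # stop when the objective plateaus
            break
        prev = obj
    return W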
“…Existing feature selection algorithms can be categorized as supervised feature selection (on data with full class labels) [5]-[9], unsupervised feature selection (on data without class labels) [10]-[15], and semisupervised feature selection (on data with partial labels) [14], [16], [17]. Feature selection in the unsupervised context is considered more difficult than the other two cases, since there is no target information available for training.…”
Section: Introduction
confidence: 99%
“…One of the prime requirements for effective classification is the selection of relevant features adept at capturing the internal structure of the data [3]. In addition, an effective feature set should display high inter-class diversity and adequate robustness to variations such as illumination effects, rotation, and translation.…”
Section: Introduction
confidence: 99%