Published: 2014 · DOI: 10.1007/s10994-014-5466-8
Random projections as regularizers: learning a linear discriminant from fewer observations than dimensions

Abstract: We prove theoretical guarantees for an averaging-ensemble of randomly projected Fisher Linear Discriminant classifiers, focusing on the case when there are fewer training observations than data dimensions. The specific form and simplicity of this ensemble permit a direct and much more detailed analysis than the generic tools used in previous works. In particular, we are able to derive the exact form of the generalization error of our ensemble, conditional on the training set, and based on this we gi…

Cited by 46 publications (61 citation statements) · References 34 publications
“…It uses all variables and correlations among variables in subspaces composed of randomly selected variables. The analysis was conducted in Matlab according to the algorithm described in Durrant and Kabán (2015). The same parameters as in Durrant and Kabán (2015) were also used.…”
Section: Classifiers
Confidence: 99%
“…The analysis was conducted in Matlab according to the algorithm described in Durrant and Kabán (2015). The same parameters as in Durrant and Kabán (2015) were also used. The dimension of the subspaces, k, was set to (N − 2)/2 to optimize the classification, where N was the number of samples in the training dataset.…”
Section: Classifiers
Confidence: 99%