Dimensionality reduction approach to multivariate prediction (2005)
DOI: 10.1016/j.csda.2003.11.021

Cited by 25 publications (15 citation statements)
References 18 publications
“…), PLS produces the same result as RRR. Alternative methods can be constructed by tuning the objective function between $\operatorname{var}(Xu_i)$ and $\operatorname{corr}^2(Xu_i, y)$ [1]. However, these methods require an additional parameter and are thus not considered here.…”
Section: Partial Least Squares (PLS) (mentioning)
confidence: 99%
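The tunable objective referred to in [1] can be sketched, hedged as an illustration in the spirit of continuum regression rather than the exact parameterization used in [1], with a tuning parameter $\gamma \ge 0$:

$$
u_i \;=\; \arg\max_{\|u\|=1}\; \operatorname{corr}^2(Xu, y)\,\bigl[\operatorname{var}(Xu)\bigr]^{\gamma},
$$

where $\gamma = 1$ recovers the PLS criterion, $\gamma = 0$ gives a pure correlation (RRR/OLS-like) criterion, and $\gamma \to \infty$ emphasizes variance and approaches PCR. The parameter $\gamma$ is the additional parameter the quoted passage declines to tune.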
“…SIMPLS maximizes $(u_i^T X^T y)^2$ under the constraint that the projections $Xu_i$ are orthogonal to each other, and that $u_i^T u_i = 1$. Since $(u_i^T X^T y)^2 = \operatorname{corr}^2(Xu_i, y)\operatorname{var}(Xu_i)$, PLS has therefore been considered a mixture of RRR and PCR [1]. For spherically distributed input data ($\operatorname{var}(Xu_i) = \text{const.}$…”
Section: Partial Least Squares (PLS) (mentioning)
confidence: 99%
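The quoted identity holds up to a factor that does not depend on $u_i$: for centered data and population-style moments, $(u^T X^T y)^2 = n^2\operatorname{var}(y)\,\operatorname{corr}^2(Xu, y)\operatorname{var}(Xu)$, so both objectives share the same maximizer. A minimal NumPy sketch on synthetic data (all values below are illustrative assumptions, not from the cited works) checks this and computes the first SIMPLS weight, which for a single response is just the normalized $X^T y$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(n)

# Center both blocks, as PLS assumes.
X = X - X.mean(axis=0)
y = y - y.mean()

# The maximizer of (u^T X^T y)^2 subject to u^T u = 1 is u1 = X^T y / ||X^T y||.
u1 = X.T @ y
u1 /= np.linalg.norm(u1)

# Verify (u^T X^T y)^2 = n^2 var(y) corr^2(Xu, y) var(Xu) for a random unit u.
u = rng.standard_normal(p)
u /= np.linalg.norm(u)
t = X @ u
lhs = (u @ X.T @ y) ** 2
rhs = n**2 * y.var() * np.corrcoef(t, y)[0, 1] ** 2 * t.var()
print(np.isclose(lhs, rhs))  # True: the objectives differ by a constant factor
```

Because the factor $n^2\operatorname{var}(y)$ is the same for every $u$, maximizing $(u_i^T X^T y)^2$ and maximizing $\operatorname{corr}^2(Xu_i, y)\operatorname{var}(Xu_i)$ select the same direction.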
“…In the past decades, CCA and its variants have been successfully used in many research areas such as facial expression recognition [3], image analysis [4], position estimation of robots [5], parameter estimation of posture [6], data regression analysis [7], image texture analysis [8], image retrieval [9], content-based text mining [10] and asymptotic convergence of the functions [11]. Given a data set with two views X and Y, the goal of CCA is to seek a set of basis vector pairs which maximize the correlation of the two views when projected into a lower-dimensional space.…”
(mentioning)
confidence: 99%
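As a concrete illustration of that two-view objective, the sketch below (synthetic data; the component count and noise level are assumptions for the example) uses scikit-learn's CCA to find paired basis vectors and confirms that the paired projections are highly correlated:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 500
z = rng.standard_normal((n, 2))                      # shared latent signal
X = z @ rng.standard_normal((2, 6)) + 0.5 * rng.standard_normal((n, 6))
Y = z @ rng.standard_normal((2, 4)) + 0.5 * rng.standard_normal((n, 4))

# Find basis vector pairs (w_x, w_y) maximizing corr(X w_x, Y w_y).
cca = CCA(n_components=2)
X_c, Y_c = cca.fit_transform(X, Y)

# Correlations of the paired projections in the low-dimensional space.
for i in range(2):
    r = np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1]
    print(f"component {i}: corr = {r:.3f}")
```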
“…Highly correlated inputs cause the problem of collinearity: model interpretation is misleading, as the importance of an input in the model can be compensated by another input. Traditional methods for addressing these problems are pure input selection (Sparks et al., 1985), regularization or shrinking (Breiman and Friedman, 1997; Srivastava and Solanky, 2003), and subspace methods (Abraham and Merola, 2005). Shrinking means that the regression coefficients are constrained such that the unimportant inputs tend to have coefficient values close to zero.…”
Section: Introduction (mentioning)
confidence: 99%
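To make the collinearity and shrinkage point concrete, the sketch below (synthetic data; the coefficient values and regularization strengths are illustrative assumptions) shows how ridge and lasso push the coefficient of an unimportant input toward zero while stabilizing two near-collinear inputs whose weight OLS can split arbitrarily:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

rng = np.random.default_rng(0)
n = 200
x1 = rng.standard_normal(n)
x2 = x1 + 0.05 * rng.standard_normal(n)   # nearly collinear with x1
x3 = rng.standard_normal(n)               # unimportant input
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + 0.1 * rng.standard_normal(n)

# OLS can trade weight freely between x1 and x2; shrinkage constrains the
# coefficients so the unimportant input ends up near zero.
for model in (LinearRegression(), Ridge(alpha=1.0), Lasso(alpha=0.1)):
    coef = model.fit(X, y).coef_
    print(type(model).__name__, np.round(coef, 3))
```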