2001
DOI: 10.1081/sac-100002371
Variable Selection and Interpretation of Covariance Principal Components

Cited by 48 publications
(27 citation statements)
References 9 publications
“…This is because the original features are more straightforward to explain than the PCs, as PCs unfortunately require a linear combination of the whole initial matrix to account for themselves. So the selection of optimal individual features (representing the nine PCs) was performed instead using feature selection principles as outlined by Jolliffe (see Al-Kandari & Jolliffe, 2001). In other words, the highest correlating feature for each PC was chosen and highlighted in Table 2.…”
Section: Feature Reduction
confidence: 99%
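The selection rule quoted above — keep, for each retained principal component, the single original feature that correlates most strongly with it — can be sketched as follows. This is an illustrative, numpy-based sketch on synthetic data, not code from the cited papers; the data dimensions and the number of components (`k = 3`) are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))    # hypothetical data: 100 samples, 6 features
Xc = X - X.mean(axis=0)          # centre the columns

# Principal component scores via SVD of the centred data matrix
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T               # one column of PC scores per component

k = 3                            # number of PCs to represent (assumed)
selected = []
for j in range(k):
    # absolute correlation of each original feature with PC j
    corrs = [abs(np.corrcoef(Xc[:, i], scores[:, j])[0, 1])
             for i in range(X.shape[1])]
    # the highest-correlating feature stands in for this PC
    selected.append(int(np.argmax(corrs)))

print(selected)                  # indices of the chosen features
```

Because each selected variable is an original feature rather than a linear combination of all of them, the reduced set retains a direct interpretation, which is the motivation given in the quoted passage.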
“…Their aim is to eliminate possible incommensurability of subjects within the individual data sets (data scaling) and size differences between data sets (configuration scaling); see (Gower and Dijksterhuis 2004). Basically, translation, rotation, and dilation, performed in that order, are the kinds of transformations that may be deemed desirable before embarking on the actual Procrustes matching (Digby and Kempton 1987; Al-Kandari and Jolliffe 2001; Bakhtiar and Siswadi 2011). PA can also be utilized to determine the goodness of fit between a data matrix and its approximation.…”
Section: Introduction
confidence: 99%
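The three transformations named in the quoted passage — translation (centring), rotation, and dilation (scaling) — make up standard orthogonal Procrustes matching. The sketch below is a generic textbook implementation, not code from any of the cited works; the function name and argument order are illustrative assumptions.

```python
import numpy as np

def procrustes_match(A, B):
    """Translate, rotate, and dilate configuration B to best match A
    (in that order, as in the quoted passage); returns the fitted B
    and the residual sum of squares."""
    A0 = A - A.mean(axis=0)            # translation: centre both configurations
    B0 = B - B.mean(axis=0)
    # rotation: optimal orthogonal matrix from the SVD of A0' B0
    U, s, Vt = np.linalg.svd(A0.T @ B0)
    R = Vt.T @ U.T
    # dilation: least-squares scale factor
    scale = s.sum() / (B0 ** 2).sum()
    B_fit = scale * B0 @ R
    residual = ((A0 - B_fit) ** 2).sum()
    return B_fit, residual
```

If `B` is just a translated, rotated, and uniformly scaled copy of `A`, the residual is (numerically) zero, which is the goodness-of-fit use of Procrustes analysis the passage mentions.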
“…This is because they are more straightforward to explain than the PCs, which are less so, as a linear combination of the whole initial matrix is needed to account for them. Therefore, the selection of optimal individual features representing the seven PCs was carried out using feature selection principles as outlined by Jolliffe (see Al-Kandari & Jolliffe, 2001). In other words, the highest correlating feature for each PC was chosen (these correlations are marked in bold in Table 3).…”
confidence: 99%