2020
DOI: 10.1080/02664763.2019.1710478

An automatic robust Bayesian approach to principal component regression

Abstract: Principal component regression uses principal components as regressors. It is particularly useful in prediction settings with high-dimensional covariates. The existing literature on Bayesian approaches is relatively sparse. We introduce a Bayesian approach that is robust to outliers in both the dependent variable and the covariates. Outliers can be thought of as observations that are not in line with the general trend. The proposed approach automatically penalises these observations so that their impa…
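For readers unfamiliar with the underlying mechanics, here is a minimal sketch of classical (non-robust, non-Bayesian) principal component regression in Python; the toy data, the number of retained components m, and all variable names are illustrative assumptions of this sketch, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-dimensional data: n observations, p correlated covariates.
n, p, m = 100, 20, 3              # m = number of retained components (assumed)
Z = rng.standard_normal((n, 5))
X = Z @ rng.standard_normal((5, p)) + 0.1 * rng.standard_normal((n, p))
y = X[:, 0] - 2 * X[:, 1] + rng.standard_normal(n)

# Standardise covariates, then extract principal components via SVD.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
W = Xs @ Vt[:m].T                  # scores on the first m components

# Regress y on the component scores (ordinary least squares).
gamma, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), W]), y, rcond=None)

# Map component coefficients back to the original (standardised) covariates.
beta = Vt[:m].T @ gamma[1:]
print("intercept:", gamma[0])
print("coefficients on standardised covariates:", beta[:5], "...")
```

Using the component scores W instead of X reduces the regression from p covariates to m, which is the dimension-reduction step that makes the method attractive when p is large relative to n.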

Cited by 10 publications (8 citation statements) | References 43 publications
“…We thus expect the results in this paper to be applicable to models in which the parameters are independent but not identically distributed. In particular, they turn out to be useful in designing the RJ algorithm when the posterior distribution arises from a robust principal component regression, as shown in Gagnon et al. (2017). This is explained by the fact that the posterior has a structure similar to that in (10).…”
Section: Optimal Implementation and Generalisation
confidence: 97%
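The "RJ algorithm" here is reversible jump MCMC. As a purely illustrative aid, and not the robust-PCR posterior discussed in the quote, the following toy Python sampler jumps between a one- and a two-dimensional Gaussian model to show the transdimensional accept/reject mechanics; the model probabilities and proposal choices are assumptions made up for this sketch.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
P_MODEL = {1: 0.3, 2: 0.7}        # assumed prior model probabilities

def log_target(k, x):
    """Log of the toy joint target: model prior times a N(0, I_k) density."""
    return np.log(P_MODEL[k]) + norm.logpdf(x).sum()

k, x = 1, np.array([0.0])
visits = {1: 0, 2: 0}
for it in range(50_000):
    if rng.random() < 0.5:                      # between-model (RJ) move
        if k == 1:                              # birth: append u ~ N(0,1)
            u = rng.standard_normal()
            x_new, k_new = np.append(x, u), 2
            log_ratio = (log_target(2, x_new) - log_target(1, x)
                         - norm.logpdf(u))      # minus proposal density of u
        else:                                   # death: drop last coordinate
            x_new, k_new = x[:1], 1
            log_ratio = (log_target(1, x_new) - log_target(2, x)
                         + norm.logpdf(x[1]))   # plus density of discarded u
        if np.log(rng.random()) < log_ratio:
            k, x = k_new, x_new
    else:                                       # within-model random walk
        prop = x + 0.5 * rng.standard_normal(x.shape)
        if np.log(rng.random()) < log_target(k, prop) - log_target(k, x):
            x = prop
    visits[k] += 1

print({m: v / 50_000 for m, v in visits.items()})  # ≈ {1: 0.3, 2: 0.7}
```

Because both birth and death moves are attempted with equal probability and the Jacobian of the dimension-matching map is 1, the acceptance ratio reduces to the target ratio divided (or multiplied) by the density of the auxiliary variable u.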
“…It is easy to imagine situations where it would be optimal to include some, but not all, components. This would result in posterior distributions for the various models with structures like the one described above (see, e.g., the real data analysis in Gagnon et al. (2017)). Note that, when p_n has such a structure, it is easier for the RJ algorithm to explore the entire state space.…”
Section: Sampling Context
confidence: 99%
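To illustrate the "some, but not all, components" point, here is a hedged Python sketch that enumerates subsets of principal components and scores each sub-model; BIC is used here purely for convenience, whereas the cited work compares models through their posterior probabilities, and the toy data are an assumption of this sketch.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, p = 120, 6
X = rng.standard_normal((n, p))
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
W = Xs @ Vt.T                                  # all principal component scores
y = 1.5 * W[:, 0] - 1.0 * W[:, 2] + rng.standard_normal(n)  # only PCs 1 and 3 matter

def bic(subset):
    """BIC of the regression of y on the chosen component scores."""
    D = np.column_stack([np.ones(n)] + [W[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    rss = np.sum((y - D @ beta) ** 2)
    return n * np.log(rss / n) + D.shape[1] * np.log(n)

subsets = [c for r in range(p + 1) for c in itertools.combinations(range(p), r)]
best = min(subsets, key=bic)
print("components selected:", best)            # typically (0, 2)
```

The selected subset skips an intermediate component, which is exactly the non-nested model structure the quote says the RJ algorithm must be able to reach.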
“…A new technique emerged to gain robustness against outliers in parametric modelling: replace the traditional distributional assumption (a normality assumption in the problems previously studied) with a super-heavy-tailed distribution assumption (Desgagné, 2015; Desgagné and Gagnon, 2019; Gagnon, Desgagné and Bédard, 2020a; Gagnon, Bédard and Desgagné, 2021). The rationale is that the latter assumption is better adapted to the eventual presence of outliers because it assigns higher probabilities to extreme values.…”
Section: Wholly-robust Linear Regression
confidence: 99%
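As a hedged illustration of why a heavier-tailed error assumption confers robustness: the cited papers use super-heavy-tailed (log-Pareto-type) distributions, but the simpler Student-t stand-in below already shows the mechanism, since extreme residuals retain non-negligible likelihood and therefore pull the fit far less. The toy data and the df=2 choice are assumptions of this sketch.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 50)
y = 2.0 + 3.0 * x + 0.2 * rng.standard_normal(50)
y[-1] += 10.0                                  # one gross outlier

def neg_loglik(theta, logpdf):
    """Negative log-likelihood of a location-scale regression model."""
    a, b, log_s = theta
    return -logpdf((y - a - b * x) / np.exp(log_s)).sum() + 50 * log_s

# Normal errors: the outlier gets a vanishing likelihood and drags the fit.
fit_norm = optimize.minimize(neg_loglik, [0.0, 0.0, 0.0],
                             args=(stats.norm.logpdf,), method="Nelder-Mead")
# Student-t errors (df=2): heavy tails leave room for the outlier.
fit_t = optimize.minimize(neg_loglik, [0.0, 0.0, 0.0],
                          args=(lambda r: stats.t.logpdf(r, df=2),),
                          method="Nelder-Mead")
print("normal  a, b:", fit_norm.x[:2])         # slope inflated by the outlier
print("student a, b:", fit_t.x[:2])            # close to the true (2, 3)
```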
“…The approximation is thus expected to be accurate when there are no severe outliers among the covariate observations. One may instead use a robust version of the correlation matrix (see Gagnon, Bédard and Desgagné (2021) for such a robust alternative), but this is not investigated here for brevity. Whether or not a robust version of the covariate matrix is used is expected to have no impact on our comparison of the algorithms in the next section because they all use q_{k′→k} = φ( · ; x̂_k, Î_k^{-1}) and the matrix Î_k only appears in q_{k′→k}.…”
Section: Wholly-robust Linear Regression
confidence: 99%
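A minimal sketch of a robust correlation matrix in Python. Note this is not the specific robust alternative proposed in Gagnon, Bédard and Desgagné (2021); it uses scikit-learn's minimum covariance determinant estimator (MinCovDet), one standard robust option, and the toy contaminated data are an assumption of the sketch.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(4)
X = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=200)
X[:5] = [[8.0, -8.0]] * 5                      # a few gross covariate outliers

def cov_to_corr(S):
    """Rescale a covariance matrix into a correlation matrix."""
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)

print("classical:", np.round(np.corrcoef(X.T), 2))          # dragged by outliers
robust_cov = MinCovDet(random_state=0).fit(X).covariance_
print("robust   :", np.round(cov_to_corr(robust_cov), 2))   # near the true 0.8
```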