2017
DOI: 10.1093/biomet/asw057
Principal weighted support vector machines for sufficient dimension reduction in binary classification

Abstract: Sufficient dimension reduction is popular for reducing data dimensionality without stringent model assumptions. However, most existing methods may work poorly for binary classification. For example, sliced inverse regression (Li, 1991) can estimate at most one direction if the response is binary. In this paper we propose principal weighted support vector machines, a unified framework for linear and nonlinear sufficient dimension reduction in binary classification. Its asymptotic properties are studied, …
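The mechanics sketched in the abstract can be illustrated in a few lines: solve a weighted soft-margin SVM over a grid of class weights, collect the resulting normal vectors, and extract their principal directions. The sketch below is a minimal illustration under stated assumptions, not the authors' exact estimator; the weighting convention, the weight grid `pis`, and the use of scikit-learn's LinearSVC as the weighted-SVM solver are all assumptions.

```python
# Minimal sketch of linear principal weighted SVM (PWSVM) for sufficient
# dimension reduction with a binary response y in {-1, +1}.
# ASSUMPTIONS: the weighting convention, the weight grid, and LinearSVC
# as the weighted-SVM solver are illustrative, not the paper's recipe.
import numpy as np
from sklearn.svm import LinearSVC

def pwsvm_directions(X, y, n_directions=1, pis=np.linspace(0.1, 0.9, 9)):
    p = X.shape[1]
    # Standardize predictors; directions are back-transformed at the end
    # (assumes the sample covariance matrix is positive definite).
    Sigma = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(Sigma)
    Sigma_inv_half = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - X.mean(axis=0)) @ Sigma_inv_half

    M = np.zeros((p, p))
    for pi in pis:
        # Weighted hinge loss: weight pi on the y = +1 class,
        # (1 - pi) on the y = -1 class.
        w = np.where(y == 1, pi, 1.0 - pi)
        clf = LinearSVC(C=1.0).fit(Z, y, sample_weight=w)
        beta = clf.coef_.ravel()
        M += np.outer(beta, beta)          # aggregate normal vectors

    # Leading eigenvectors of M span the estimated subspace (PCA step).
    _, vecs = np.linalg.eigh(M)
    B_std = vecs[:, -n_directions:]
    return Sigma_inv_half @ B_std          # back to the original X scale
```

Varying the class weight traces out a family of optimal separating hyperplanes, and their normal vectors collectively span more than the single direction that sliced inverse regression can recover for a binary response; the eigen-decomposition of the aggregated outer products is what makes the method "principal".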

Cited by 23 publications (34 citation statements); references 34 publications.
“…The idea of combining SVM with sufficient dimension reduction is not new. Since the first attempt by PSVM to adapt the soft-margin SVM [19], principal weighted SVM was considered in [25], principal minimax SVM was introduced in [26],…”
Section: Population Level Development of Principal Least Squares SVM
Mentioning, confidence: 99%
“…This looks similar to the adaptively weighted large-margin classifier objective function proposed by Wu and Liu (2013) in the classification framework. We first solve (15) based on the quadratic programming problem suggested by the following Theorem, and then use the minimizer ζ* to estimate ψ* = Σ_n^{-1/2} ζ*, which is the minimizer of (14). Also, note that ⊙ is used to denote the elementwise multiplication of two vectors of the same size, i.e.…”
Section: Adaptively Weighted Principal SVM
Mentioning, confidence: 99%
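The two-step estimation described in the quoted passage (minimize on the standardized scale, then back-transform the minimizer by Σ_n^{-1/2}) can be sketched briefly. Here `objective` is a hypothetical stand-in for the quadratic objective (15), and scipy's general-purpose solver replaces the quadratic program of the cited theorem; both are assumptions for illustration.

```python
# Minimal sketch of the two-step estimation from the quoted passage.
# ASSUMPTIONS: `objective` is a hypothetical stand-in for the convex
# objective (15); scipy's BFGS replaces the paper's quadratic program.
import numpy as np
from scipy.optimize import minimize

def estimate_psi(Z, y, Sigma_n, objective):
    """Minimize over zeta on the standardized scale, then back-transform."""
    p = Z.shape[1]
    # Step 1: zeta* = argmin of the standardized-scale objective (15).
    res = minimize(objective, x0=np.zeros(p), args=(Z, y), method="BFGS")
    zeta_star = res.x
    # Step 2: psi* = Sigma_n^{-1/2} zeta*, the minimizer of (14).
    evals, evecs = np.linalg.eigh(Sigma_n)   # Sigma_n assumed positive definite
    Sigma_inv_half = evecs @ np.diag(evals ** -0.5) @ evecs.T
    return Sigma_inv_half @ zeta_star
```

Working on the standardized scale and mapping the solution back through Σ_n^{-1/2} is a common whitening device: it decouples the optimization from the predictor covariance while still returning directions on the original scale.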
“…Further to this, we also apply the adaptive weights to the Principal L2 SVM (PL2SVM), which was proposed by Artemiou and Dong (2016) and demonstrated to generally outperform PSVM. Finally, although the theoretical framework of our methodology is similar to that of Shin et al. (2017), who used weighted SVM to achieve dimension reduction with binary responses, we emphasize that there are important differences. First of all,…”
Section: Introduction
Mentioning, confidence: 99%
“…The idea of using SVM and different algorithms has since been expanded in a number of directions. Artemiou and Dong (2016) used LqSVM, which ensures the uniqueness of the solution; Zhou and Zhu (2016) used a minimax variation for sparse SDR; Shin and Artemiou (2017) replaced the hinge loss with a logistic loss to achieve the desired result; Shin, Wu, Zhang, and Liu (2017) used a weighted SVM approach for binary responses; and Artemiou and Shu (2014) and Smallman and Artemiou (2017) focused on removing the bias due to imbalance. One of the most interesting variations of SVM was proposed by Marron, Todd, and Ahn (2007) and is known as Distance-Weighted Discrimination (DWD).…”
Section: Introduction
Mentioning, confidence: 99%