2022
DOI: 10.1007/s10489-022-03183-2
Joint rescaled asymmetric least squared nonparallel support vector machine with a stochastic quasi-Newton based algorithm

Cited by 8 publications (5 citation statements). References 43 publications.
“…Therefore, the effect of zero mean feature noise is weakened, and the final separating hyperplane of the CaENSVM is stable under resampling. As τ decreases (close to 0), by (38), the final separating hyperplane is gradually dominated by the instances in S_4^w. As a result, the classification results are significantly disturbed by the zero mean feature noise around the decision boundary.…”
Section: Resampling Stability to Feature Noise
confidence: 98%
“…With parameters τ and θ properly selected, equation (40) indicates that τ controls the sensitivity of the CaENSVM to feature noise. In fact, by (38), ((1 − θ) − θ(1 − y_i w^T x_i)) and (θ(1 − y_i w^T x_i) + (1 − θ)) are both positive, which means that a large τ (close to 1) can well balance the sizes of S_2^w and S_4^w under zero mean feature noise. Therefore, the effect of zero mean feature noise is weakened, and the final separating hyperplane of the CaENSVM is stable under resampling.…”
Section: Resampling Stability to Feature Noise
confidence: 99%
“…However, the ℓ0-norm is non-convex, and solving the optimization problem with the ℓ0-norm is NP-hard. Therefore, some approximate computational methods [11][12][13][14] have been proposed. These methods integrate an approximate ℓ0-norm function into the classifier to find an appropriate feature subset and obtain better prediction accuracy.…”
Section: Introduction
confidence: 99%
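A common smooth surrogate for the ℓ0-norm is the exponential approximation, sketched below. This is one standard choice for illustration and is not necessarily the approximation used in the cited methods [11]–[14]:

```python
import numpy as np

def approx_l0_norm(w, alpha=5.0):
    """Smooth surrogate for the l0-norm: sum_i (1 - exp(-alpha * |w_i|)).

    Each term is near 0 for w_i == 0 and approaches 1 for large |w_i|,
    so as alpha grows the sum tends to the exact count of nonzeros,
    while remaining differentiable and usable in gradient-based solvers.
    """
    return np.sum(1.0 - np.exp(-alpha * np.abs(w)))

w = np.array([0.0, 0.5, -2.0, 0.0])  # exact l0-norm is 2
```

Because the surrogate is differentiable, it can be folded directly into a classifier's objective for embedded feature selection, which is the idea the citing passage describes.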
“…The hinge loss is sensitive to outliers, and the pinball loss is not robust enough to outliers because it is unbounded. Since any convex loss is unbounded, many scholars have begun to focus on non-convex loss functions [11,12,20] and other types of loss functions [21][22][23], transforming the loss function to make it bounded. Scholars have conducted extensive research on loss functions [22,24].…”
Section: Introduction
confidence: 99%
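A textbook example of the bounded, non-convex losses this passage refers to is the ramp (truncated hinge) loss; this is a standard illustration and not necessarily the specific loss adopted in the cited works:

```python
def hinge(margin):
    """Convex hinge loss: grows without bound as the margin decreases."""
    return max(0.0, 1.0 - margin)

def ramp(margin, s=-1.0):
    """Ramp (truncated hinge) loss: the hinge loss clipped at 1 - s.

    Bounded and non-convex; once an instance's margin drops below s,
    its loss stops increasing, so far outliers cannot dominate the fit.
    """
    return min(hinge(margin), 1.0 - s)
```

The clipping is exactly the "transforming the loss function to make it bounded" step: an outlier with margin −5 contributes 6 under the hinge loss but only 2 under the ramp loss with s = −1.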