2020
DOI: 10.1109/lsp.2020.3037512
DMA Regularization: Enhancing Discriminability of Neural Networks by Decreasing the Minimal Angle

Cited by 7 publications (5 citation statements)
References 22 publications
“…Given that the angles between the feature vector and the weight vectors carry abundant discriminative information [10,16,17], and that adversarial attacks perturb these angles, we propose a regularization term that directly encourages weight–feature compactness; specifically, it minimizes the angle between the adversarial feature vector and the weight vector corresponding to the ground-truth label y. In addition, prior work [18] has argued for strong connections between adversarial robustness and inter-class separability.…”
Section: Proposed Method (mentioning)
confidence: 99%
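The weight–feature angle penalty this excerpt describes can be sketched in a few lines of NumPy. This is a minimal illustration, not the cited paper's exact formulation: the function name and the choice of the raw angle (rather than a smooth surrogate such as 1 − cos) as the penalty are assumptions.

```python
import numpy as np

def angle_regularizer(feature, weight_y):
    """Penalty equal to the angle (radians) between a feature vector and
    the weight vector of the ground-truth class; minimizing it pulls the
    feature toward that class direction on the hypersphere."""
    cos = np.dot(feature, weight_y) / (
        np.linalg.norm(feature) * np.linalg.norm(weight_y))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return np.arccos(np.clip(cos, -1.0, 1.0))

# A feature at 45 degrees from its class weight incurs a pi/4 penalty.
penalty = angle_regularizer(np.array([1.0, 0.0]), np.array([1.0, 1.0]))
```

In training, a scaled version of this term would simply be added to the classification loss for each (adversarial) example.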
“…To train our stacked generalization model, we try to reduce redundancy among multiple model predictions by applying diversity regularization (53) to the fully connected layer of the MLP. This regularization ensures diverse weight vectors by maximizing the minimal pairwise angle between vectors.…”
Section: Overall Training Objective (mentioning)
confidence: 99%
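The quantity that this diversity regularization maximizes — the minimal pairwise angle among a layer's weight vectors — can be computed directly. A minimal NumPy sketch follows (the helper name is assumed; a training loop would add the negative of this value, suitably scaled, to the loss):

```python
import numpy as np

def minimal_pairwise_angle(W):
    """Smallest pairwise angle (radians) among the rows of W.
    A diversity regularizer maximizes this value, pushing the
    weight vectors apart on the hypersphere."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    cos = Wn @ Wn.T
    np.fill_diagonal(cos, -1.0)  # exclude each vector's self-similarity
    # The largest off-diagonal cosine corresponds to the smallest angle.
    return np.arccos(np.clip(cos.max(), -1.0, 1.0))

# Mutually orthogonal weight vectors already achieve a pi/2 minimal angle.
theta = minimal_pairwise_angle(np.eye(3))
```

Equivalently, one can minimize the maximum pairwise cosine similarity, which is the formulation used by some of the related works cited below.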
“…Recently, Zhou et al [66] extend this line of work by learning towards the largest margin with a zero-centroid regularization. Hyperspherical uniformity can also be approximated by minimizing the maximum cosine similarity between pairs of class vectors [14,34,57]. In this work, we also start from hyperspherical uniformity and deviate from current literature on one important axis: hyperspherical uniformity does not require optimization.…”
Section: Related Work (mentioning)
confidence: 99%
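The point that hyperspherical uniformity "does not require optimization" can be made concrete: for C classes, a regular-simplex frame gives C unit vectors whose pairwise cosines all equal −1/(C−1), the optimum of the min-max cosine objective, in closed form. The sketch below uses a standard construction; the function name is assumed and the code is not taken from the cited works.

```python
import numpy as np

def simplex_prototypes(C):
    """C maximally separated unit vectors in R^(C-1) (a regular simplex):
    every pairwise cosine equals -1/(C-1), so no optimization is needed
    to reach hyperspherical uniformity."""
    I = np.eye(C)
    centered = I - I.mean(axis=0)          # zero-centroid rows
    # Orthonormal basis of the (C-1)-dim subspace orthogonal to the
    # all-ones vector; projecting onto it preserves inner products.
    Q, _ = np.linalg.qr(centered.T)
    P = centered @ Q[:, :C - 1]
    return P / np.linalg.norm(P, axis=1, keepdims=True)
```

Using these fixed vectors as the classifier weights sidesteps the iterative repulsion objectives discussed above.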