2022
DOI: 10.3389/fnbot.2022.859610
Generative Adversarial Training for Supervised and Semi-supervised Learning

Abstract: Neural networks have played critical roles in many research fields. The recently proposed adversarial training (AT) can improve the generalization ability of neural networks by adding intentional perturbations during training, but it sometimes still fails to generate worst-case perturbations, resulting in limited improvement. Instead of designing a specific smoothness function and seeking an approximate solution, as existing AT methods do, we propose a new training methodology, named Generative AT (G…
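The abstract contrasts the proposed Generative AT with standard adversarial training, in which perturbations are crafted by an explicit attack at each training step. Below is a minimal sketch of such a standard AT step using a one-step FGSM perturbation; the model, optimizer, and epsilon are illustrative placeholders, and this is not the paper's GAT method.

```python
# Minimal sketch of a standard adversarial-training (AT) step with a one-step
# FGSM attack. Assumes inputs x are scaled to [0, 1]; model/optimizer are placeholders.
import torch
import torch.nn.functional as F

def fgsm_adversarial_step(model, optimizer, x, y, epsilon=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]                     # gradient w.r.t. the input
    x_adv = (x + epsilon * grad.sign()).clamp(0, 1).detach()   # intentional perturbation

    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)                # train on the perturbed batch
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```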

Cited by 8 publications (6 citation statements) · References 23 publications
“…Generative adversarial networks (GANs) are popular in computer vision [23], [24] and anomaly detection [25] because they can generate data and handle complex data distributions effectively. GANs offer many benefits, but they are difficult to train [26].…”
Section: Related Work
confidence: 99%
“…As an unsupervised learning paradigm, contrastive learning directly uses the data itself as supervision to learn feature representations of samples and applies them to downstream tasks, without relying on manually annotated category labels. 58 The basic idea of contrastive learning is shown in Figure 3. Contrastive learning learns sample feature representations by comparing the training data with positive and negative samples in the feature space.…”
Section: Contrastive Distortion-level Learning
confidence: 99%
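The contrastive idea described in this excerpt can be illustrated with an InfoNCE-style loss, where matching views of the same sample are positives and all other samples in the batch are negatives. The sketch below is a generic illustration under that assumption, not the cited paper's exact distortion-level loss.

```python
# Minimal InfoNCE-style contrastive loss. Assumes z1 and z2 are embeddings of two
# augmented views of the same batch; positives sit on the diagonal of the logits.
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # pairwise cosine similarities
    targets = torch.arange(z1.size(0))      # matching rows are the positive pairs
    return F.cross_entropy(logits, targets)
```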
“…While these works can maintain high recognition accuracy, deploying a high-performance lightweight neural network requires additional manual tuning by designers, which limits their popularization. 27 Different from prior network pruning and tuning works, this paper is orthogonal to existing memory-saving schemes for object detection tasks and can be implemented as an add-on module to existing solutions. The network remaining after pruning can be regarded as an independent network for further swapping, which improves the reusability of each branch of the network.…”
Section: Efficient DNN Memory Usage
confidence: 99%
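For context, the pruning this excerpt builds on can be illustrated with simple magnitude pruning, which zeroes the smallest-magnitude weights of each layer. The sketch below assumes a generic PyTorch model and a per-layer sparsity target; it is not the cited paper's branch-swapping scheme.

```python
# Minimal magnitude-pruning sketch: zero the smallest-magnitude weights of each
# linear/convolutional layer. Model and sparsity level are illustrative assumptions.
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.5):
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            w = module.weight.data
            k = int(w.numel() * sparsity)
            if k == 0:
                continue
            threshold = w.abs().flatten().kthvalue(k).values   # k-th smallest magnitude
            mask = (w.abs() > threshold).float()               # keep only larger weights
            module.weight.data.mul_(mask)
    return model
```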
“…Howard et al. 26 designed two hyper-parameters to build MobileNets, a small, low-latency model that meets the requirements of embedded vision applications. While these works can maintain high recognition accuracy, deploying a high-performance lightweight neural network requires additional manual tuning by designers, which limits their popularization. 27 …”
Section: Background and Related Work
confidence: 99%
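The two hyper-parameters this excerpt refers to are commonly described as MobileNet's width and resolution multipliers. The sketch below shows how a width multiplier shrinks the channels of a depthwise-separable block and how a resolution multiplier shrinks the input; the block layout and values are illustrative, not the cited design.

```python
# Minimal sketch of a depthwise-separable block scaled by MobileNet-style
# width and resolution multipliers. Channel counts and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthwiseSeparable(nn.Module):
    def __init__(self, in_ch, out_ch, width_mult=0.75):
        super().__init__()
        in_ch = max(1, int(in_ch * width_mult))    # width multiplier shrinks channels
        out_ch = max(1, int(out_ch * width_mult))
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return F.relu(self.pointwise(F.relu(self.depthwise(x))))

# Resolution multiplier: feed a down-scaled input (e.g. 160x160 instead of 224x224).
x = torch.randn(1, 24, 160, 160)                   # 24 = int(32 * 0.75) input channels
y = DepthwiseSeparable(32, 64)(x)                  # output: (1, 48, 160, 160)
```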