2020
DOI: 10.1155/2020/4706576

A Full Stage Data Augmentation Method in Deep Convolutional Neural Network for Natural Image Classification

Abstract: Nowadays, deep learning has achieved remarkable results in many computer vision related tasks, among which the support of big data is essential. In this paper, we propose a full stage data augmentation framework to improve the accuracy of deep convolutional neural networks, which can also play the role of implicit model ensemble without introducing additional model training costs. Simultaneous data augmentation during training and testing stages can ensure network optimization and enhance its generalization ab…

Cited by 147 publications (103 citation statements)
References 31 publications
“…Antoniou et al. 36 proposed to train a conditional GAN (DAGAN) to perform data augmentation. It is also worth mentioning that regularized deep learning [37][38][39][40][41] is an efficient and vital way to improve generalization ability: full stage data augmentation 37 plays the role of an implicit model ensemble without introducing additional model training costs; PReLU 38 is a new activation function that improves classification performance with a fast convergence rate; LLb-SGD 40 is a simple, computationally efficient gradient-based optimization method; the two-stage training method 39 regularizes the feature boundaries of deep networks from the point of view of data punishment so as to improve their generalization ability; drop-path 41 reduces the model parameters of deep networks and accelerates inference. On the other hand, MetaGAN 42 is a simple and versatile framework for improving the performance of few-shot learning models, based on the idea that generators produce fake samples that help the classifier learn more explicit decision boundaries between categories from only a few samples.…”
Section: Related Work
confidence: 99%
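The "implicit model ensemble" behavior attributed to full stage data augmentation above comes from also augmenting at test time and averaging the resulting predictions. A minimal numpy sketch of that idea follows; the `predict` callback and the specific transforms (flip, small translation) are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def tta_predict(predict, image, n_augment=8, rng=None):
    """Test-time augmentation: average class probabilities over randomly
    augmented copies of one input, giving an implicit ensemble of views
    without training any additional models."""
    if rng is None:
        rng = np.random.default_rng(0)
    probs = []
    for _ in range(n_augment):
        aug = image[:, ::-1] if rng.random() < 0.5 else image  # random horizontal flip
        aug = np.roll(aug, rng.integers(-2, 3), axis=1)        # small random translation
        probs.append(predict(aug))
    return np.mean(probs, axis=0)
```

Averaging over `n_augment` views smooths out prediction noise from any single crop or flip, which is where the ensemble-like accuracy gain comes from.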
“…Moreover, aiming at accelerating the learning process, a pruning scheme is introduced in Reference [38], in which the model parameters of a 2D deep CNN are reduced. Zheng et al. 39 improved the performance of CNN models using a full-stage data augmentation strategy, which yields an implicit model ensemble without requiring extra model training costs.…”
Section: Related Work
confidence: 99%
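The pruning scheme cited above is not detailed in this excerpt; as a hedged illustration of the general idea of reducing CNN parameters, here is a generic global magnitude-pruning sketch (the function name and sparsity choice are assumptions, not the method of Reference [38]).

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights.
    Returns the pruned weight array and the boolean keep-mask;
    the surviving parameters are those above the magnitude threshold."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask, mask
```

In practice the mask is applied after (or during) training and kept fixed, so downstream inference touches fewer effective parameters.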
“…Data augmentation is a general technique to improve both the generalizability and convergence of neural networks, 10,17 since it tackles the qualitative and quantitative limitations of the employed data. 10,25,26 Data augmentation techniques can be loosely grouped into two categories: the more traditional approach 17,27 is based on methods such as image rotation, shearing, cropping, translation, and color transformation, whereas generative adversarial networks (GANs) 10,25,28 show great promise for generating “fake” images.…”
Section: Data Augmentation for Image Analysis by Neural Network
confidence: 99%