2020
DOI: 10.3390/s20061626
Stochastic Selection of Activation Layers for Convolutional Neural Networks

Abstract: In recent years, the field of deep learning has achieved considerable success in pattern recognition, image segmentation, and many other classification fields. There are many studies and practical applications of deep learning to image, video, and text classification. Activation functions play a crucial role in the discriminative capabilities of deep neural networks, and the design of new “static” or “dynamic” activation functions is an active area of research. The main difference between “static” and “dynamic”…
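The core idea named in the title — stochastically selecting which activation function each layer uses — can be illustrated with a minimal sketch in plain NumPy. The pool of activations and the helper below are illustrative assumptions, not the paper's exact candidate set or selection scheme (the paper also considers learnable "dynamic" activations, omitted here):

```python
import random
import numpy as np

# Illustrative pool of "static" activation functions (an assumption;
# the paper's actual pool may differ).
ACTIVATIONS = {
    "relu": lambda x: np.maximum(0.0, x),
    "leaky_relu": lambda x: np.where(x > 0, x, 0.01 * x),
    "elu": lambda x: np.where(x > 0, x, np.expm1(x)),
}

def build_random_activation_stack(n_layers, rng=None):
    """Draw one activation name per layer, uniformly at random."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    return [rng.choice(sorted(ACTIVATIONS)) for _ in range(n_layers)]

# Each network in an ensemble would get its own random stack.
names = build_random_activation_stack(4)
```

Building several networks, each with an independently drawn activation stack, is what makes an ensemble of such models diverse.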

Cited by 36 publications (32 citation statements)
References 43 publications
“…The Random Forest model had the highest discrimination performance, but all models had sufficient capabilities with high AUC values. It has been reported that the accuracy of CNN models can be improved by combining them [ 30 – 32 ]. Although only one CNN model was used in this study, it may be possible to improve the accuracy further by combining multiple CNN models.…”
Section: Discussion
confidence: 99%
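The combination of CNN models referenced in this statement is commonly realized by averaging the models' class-probability outputs ("soft voting"). A minimal sketch, assuming each model already returns per-sample class probabilities (this is a generic ensembling recipe, not the cited works' exact method):

```python
import numpy as np

def ensemble_average(prob_list):
    """Soft voting: average per-model class-probability matrices."""
    return np.mean(np.stack(prob_list), axis=0)

# Two hypothetical models, two samples, two classes.
p1 = np.array([[0.9, 0.1], [0.2, 0.8]])
p2 = np.array([[0.7, 0.3], [0.4, 0.6]])
avg = ensemble_average([p1, p2])
preds = avg.argmax(axis=1)  # final ensemble prediction per sample
```

Averaging probabilities rather than hard labels lets a confident model outvote an uncertain one, which is one reason ensembles of CNNs often beat any single member.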
“…As in [ 29 ], the deep learner for image segmentation is Deeplabv3+ [ 30 ]. Its predecessor, Deeplabv3 [ 31 ], uses atrous convolution [ 32 ], or dilated convolution, to repurpose pretrained networks in order to control the resolution of feature responses without adding extra parameters.…”
Section: System Description
confidence: 99%
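Atrous (dilated) convolution, mentioned above, enlarges the receptive field by sampling the input with gaps of `dilation - 1` between filter taps, without adding parameters. A 1-D sketch of the operation (illustrative only, not DeepLabv3's implementation, which uses 2-D dilated convolutions inside a pretrained backbone):

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """'Valid' 1-D dilated cross-correlation (the deep-learning convention).

    Tap k of the filter reads x[i + k * dilation], so a length-3 filter
    with dilation=2 covers a span of 5 input positions.
    """
    span = (len(w) - 1) * dilation + 1  # effective receptive field
    out = [
        sum(w[k] * x[i + k * dilation] for k in range(len(w)))
        for i in range(len(x) - span + 1)
    ]
    return np.array(out, dtype=float)
```

With `dilation=1` this reduces to an ordinary convolution; increasing the dilation widens the receptive field while the filter keeps the same three weights.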
“…The encoder–decoder structure of DeepLabv3+ allows it to be built on top of a pretrained CNN, such as ResNet50, the architecture used here (internal preliminary investigations showed that both ResNet101 and ResNet34 produced similar results to ResNet50). Following the methodology outlined in reference [ 29 ], all models for skin segmentation were trained on a dataset containing 2000 images using class weighting; the training parameters used in this work, however, are the following: batch size (30), learning rate (0.001), and max epoch (50), with data augmentation set to 30 epochs.…”
Section: System Description
confidence: 99%
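Class weighting, as used in the training setup above, typically scales each sample's loss term by the weight of its true class, so that under-represented classes (e.g. skin pixels in a mostly-background image) contribute more to the gradient. A minimal weighted cross-entropy sketch (a generic recipe, not the cited work's exact loss):

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Mean cross-entropy with each term scaled by its true class's weight.

    probs:         (n, c) predicted class probabilities per sample
    labels:        (n,)   integer true-class indices
    class_weights: (c,)   per-class weights, e.g. inverse class frequency
    """
    eps = 1e-12  # guard against log(0)
    picked = probs[np.arange(len(labels)), labels]  # prob of the true class
    w = class_weights[labels]                       # weight of the true class
    return float(np.mean(-w * np.log(picked + eps)))
```

Weights are often set inversely proportional to class frequency, so a rare class with twice the weight doubles the penalty for misclassifying its pixels.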
“…Maguolo et al [ 49 ] have proposed a weighted resampling based transfer learning algorithm that combines efficient data with the data labelled in its target domain, and then, this scheme integrates the learned classifiers with the integrated data. A novel model has been proposed by Nanni et al [ 50 ], which combines static and dynamic activation functions. This scheme replaces all activation layers of a Convolutional Neural Network (CNN).…”
Section: Introduction
confidence: 99%