2018 IEEE International Symposium on Circuits and Systems (ISCAS)
DOI: 10.1109/iscas.2018.8351550
Deep CNNs for microscopic image classification by exploiting transfer learning and feature concatenation

Cited by 225 publications (106 citation statements). References 33 publications.
“…The InceptionResNetV2 network [24] combines the Inception-based network structure with residual connections. InceptionResNetV2 performs nearly identically to the Inception architectures, but the residual connections let it achieve a significant acceleration in training [25].…”
Section: Deep Learning and Pretrained CNN Models
confidence: 99%
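The training benefit of residual connections mentioned in the excerpt above can be sketched numerically. This is a toy NumPy illustration, not actual InceptionResNetV2 code: `conv_like` is a hypothetical stand-in for a convolutional layer, and the point is only that a residual block with zeroed weights reduces to the identity, so the block merely has to learn a correction to its input.

```python
import numpy as np

def conv_like(x, w):
    # Hypothetical stand-in for a convolutional transform.
    return np.tanh(x @ w)

def residual_block(x, w):
    # Residual connection: output = input + transform(input).
    return x + conv_like(x, w)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w = np.zeros((8, 8))      # with zero weights the transform contributes nothing...
y = residual_block(x, w)
# ...so the block starts out as the identity mapping, which is one intuition
# for why residual connections ease optimization of very deep networks.
assert np.allclose(y, x)
```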
“…In such a scenario, in order to leverage the power of CNNs while reducing computational cost, transfer learning can be used [7,8]. In this approach, the CNN is first pre-trained on a large, diverse, generic image data set and then applied to a specific task [9]. Several pre-trained networks have won international competitions, such as VGGNet [10], ResNet [11], NASNet [12], MobileNet [13], Inception [14] and Xception [15].…”
confidence: 99%
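The transfer-learning recipe described above (pre-train a backbone on generic images, then adapt it cheaply to a specific task) can be sketched in miniature. This is a hedged toy sketch, not the paper's method: a fixed random projection plus ReLU stands in for a frozen pre-trained CNN backbone, and only a small logistic-regression head is trained on the task data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen backbone: a fixed random projection + ReLU stands in
# for a CNN pre-trained on a large generic image data set.
W_frozen = rng.standard_normal((64, 16)) / np.sqrt(64)

def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)

# Small task-specific toy data set (stand-in for, e.g., microscopic images).
X = rng.standard_normal((100, 64))
y = (X[:, 0] > 0).astype(float)

F = extract_features(X)      # backbone stays frozen: W_frozen is never updated
w, b = np.zeros(16), 0.0     # only this small head is trained

def loss(w, b):
    p = 1 / (1 + np.exp(-(F @ w + b)))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

loss_before = loss(w, b)
for _ in range(300):         # plain gradient descent on the head only
    p = 1 / (1 + np.exp(-(F @ w + b)))
    w -= 0.5 * F.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)
loss_after = loss(w, b)

assert loss_after < loss_before  # the cheap head adapts to the new task
```

Training only the head keeps the computational cost far below full fine-tuning, which is the trade-off the citing authors describe.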
“…In the Inception-ResNet block, convolutional filters of multiple sizes are combined via residual connections. The use of residual connections not only avoids the degradation problem caused by deep structures but also reduces training time [81]. The 35 × 35, 17 × 17 and 8 × 8 grid modules, known as the Inception-A, Inception-B and Inception-C blocks, are used in the Inception-ResNet-v2 network.…”
Section: Deep Learning Architectures for Semantic Segmentation - VGG16
confidence: 99%
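The structure described above (parallel branches with different filter sizes, merged and added back to the input through a residual connection) can be sketched with a 1-D toy. This is a loose illustration under stated assumptions, not the Inception-ResNet-v2 blocks themselves: `avg_filter` is a hypothetical moving-average stand-in for a k × k convolution, and a simple mean stands in for the learned merge of the concatenated branches.

```python
import numpy as np

def avg_filter(x, k):
    # 1-D moving average: a toy stand-in for a k x k convolution.
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([xp[i:i + k].mean() for i in range(len(x))])

def inception_resnet_block(x):
    # Parallel branches with different receptive-field sizes...
    branches = [avg_filter(x, k) for k in (1, 3, 5)]
    merged = np.mean(branches, axis=0)   # toy merge of the "concatenated" branches
    return x + merged                    # ...combined via a residual connection

x = np.linspace(0.0, 1.0, 8)
out = inception_resnet_block(x)
assert out.shape == x.shape  # the residual add requires matching shapes
```

Keeping the branch outputs the same shape as the input is what makes the final residual addition possible, mirroring the shape constraints in the real blocks.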