2018
DOI: 10.1016/j.patcog.2018.02.006

A deeply supervised residual network for HEp-2 cell classification via cross-modal transfer learning

Cited by 102 publications (64 citation statements). References 31 publications.
“…Furthermore, the results do not suggest that larger datasets necessarily lead to better results: the smallest and the largest datasets lead to the best performances equally often. Lei et al (2018) address HEp-2 cell classification in the ICPR 2016 challenge as the target task (Lovell et al, 2016). Compared models include a ResNet pretrained on ImageNet, and a ResNet pretrained on data from the earlier edition of the challenge, ICPR 2012 (Foggia et al, 2013).…”
Section: Comparisons of Source Datasets (mentioning)
confidence: 99%
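The comparison described in this excerpt can be illustrated with a minimal PyTorch sketch (not the authors' deeply supervised architecture): a ResNet initialized either from ImageNet weights or from a checkpoint trained on the earlier ICPR 2012 data, with its head replaced and fine-tuned for the ICPR 2016 HEp-2 classes. The class count, checkpoint path, and hyperparameters below are illustrative assumptions.

```python
# Minimal fine-tuning sketch for comparing source datasets (illustrative only).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6          # HEp-2 staining patterns in the target task (assumed)
SOURCE = "imagenet"      # or "icpr2012" to compare source datasets

model = models.resnet18(
    weights=models.ResNet18_Weights.DEFAULT if SOURCE == "imagenet" else None
)
if SOURCE == "icpr2012":
    # hypothetical checkpoint pretrained on the earlier challenge data
    model.load_state_dict(torch.load("resnet18_icpr2012.pth"))

# Replace the classification head for the target classes and fine-tune end-to-end.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```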
“…Similarity of datasets is often used to hypothesize about which source data will be best, but definitions of similarity differ. For example, Menegola et al (2017) discuss similarity in terms of visual similarities of the images, while Lei et al (2018) discuss similarity in terms of feature representations. In computer vision, other definitions may be used -- for example, Azizpour et al (2015) investigate transfer from ImageNet and Places to 15 other datasets, and define similarity in terms of the number and variety of the classes.…”
Section: Opportunities for Further Research (mentioning)
confidence: 99%
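As a rough illustration of "similarity in terms of feature representations", one could embed images from two datasets with a fixed pretrained backbone and compare their mean embeddings. This is only one possible operationalization sketched here for clarity; it is not claimed to be the measure used by Lei et al (2018) or the other cited works.

```python
# Compare two datasets in the feature space of a fixed pretrained backbone.
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # keep the 512-d pooled features
backbone.eval()

@torch.no_grad()
def mean_embedding(loader):
    feats = [backbone(images) for images, _ in loader]
    return torch.cat(feats).mean(dim=0)

def dataset_similarity(source_loader, target_loader):
    # cosine similarity between the mean feature vectors of the two datasets
    return F.cosine_similarity(mean_embedding(source_loader),
                               mean_embedding(target_loader), dim=0).item()
```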
“…The main advantage of transfer learning is the improvement of classifier accuracy and the acceleration of the learning process [33]. Previous studies in the literature have demonstrated that transfer learning also has the potential to reduce the problem of overfitting [34], [35]. Although the dataset is not the same, low-level features from the source dataset are generic features e.g.…”
Section: Pre-trained DCNN Feature Extractors (mentioning)
confidence: 99%
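A minimal sketch of the feature-extractor usage described above: the pretrained backbone is frozen so its generic low-level filters are reused as-is, and only a small classifier head is trained, which can help limit overfitting on a small target dataset. The class count and optimizer settings are assumptions.

```python
# Frozen pretrained DCNN used as a fixed feature extractor (illustrative sketch).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False              # freeze the pretrained backbone

model.fc = nn.Linear(model.fc.in_features, 6)    # new trainable head only (6 classes assumed)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```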
“…Transfer learning allows the use of labeled or, more importantly, unlabeled data to train an algorithm, and then train again on a small corpus of labeled data. For example, in any kind of image classification task, it is now commonly accepted to use a pre-trained model trained on a large labeled dataset such as ImageNet [15], and then again train that model on a smaller labeled dataset of the final task (known as fine-tuning) [16,17]. This is an example where a large corpus of labeled data is being used to augment the performance in a task where labeled data are scarce.…”
Section: Introduction (mentioning)
confidence: 99%
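The fine-tuning recipe described in this excerpt is often implemented with discriminative learning rates: gentle updates to the pretrained layers and larger steps for the new task-specific head. The sketch below is a hedged illustration; the specific values and the choice of ResNet-18 are assumptions, not taken from the cited papers.

```python
# Fine-tuning an ImageNet-pretrained model with per-layer-group learning rates.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 6)   # new task-specific head (6 classes assumed)

optimizer = torch.optim.SGD([
    {"params": [p for n, p in model.named_parameters() if not n.startswith("fc.")],
     "lr": 1e-4},                                # small steps for pretrained layers
    {"params": model.fc.parameters(), "lr": 1e-2},   # larger steps for the new head
], momentum=0.9)
```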