2019
DOI: 10.1016/j.cobme.2018.12.005

Cats or CAT scans: Transfer learning from natural or medical image source data sets?

Abstract: Transfer learning is a widely used strategy in medical image analysis. Instead of only training a network with a limited amount of data from the target task of interest, we can first train the network with other, potentially larger source datasets, creating a more robust model. The source datasets do not have to be related to the target task. For a classification task in lung CT images, we could use either head CT images or images of cats as the source. While head CT images appear more similar to lung CT image…


Cited by 49 publications (35 citation statements) | References 33 publications
“…Transfer learning is another technique that can help overcome the limitation of small labelled data sets. It has found particular application for transferring pretrained CNNs for image recognition tasks or for NIR spectroscopy calibration transfer across spectrometers [75,76]. A model trained on another system, for example a laboratory or pilot scale model system, can be used to aid in the prediction of the state of the target system.…”
Section: Discussion
confidence: 99%
“…For example, the optimised signal processing, network weights, or ML hyperparameter values from the first system can be used as initial training values for the target system. Alternatively, the outputs of the previously trained model applied to the target system may be used as inputs to a second model [75].…”
Section: Discussion
confidence: 99%
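The two transfer routes in the excerpt above can be sketched in plain Python. This is a toy illustration, not the cited authors' code: `source_model`, the weights, and the data point are all made-up stand-ins. Route (a) warm-starts the target model from pretrained weights; route (b) feeds the pretrained model's outputs into a second model as input features.

```python
# Hypothetical stand-in for a model pretrained on the source system:
# a fixed nonlinear feature map.
def source_model(x):
    return [x[0] + x[1], x[0] * x[1]]

# Route (a): warm start -- copy pretrained weights as initial training values.
pretrained_weights = [0.8, -0.3]
target_weights = list(pretrained_weights)  # initialise, then fine-tune below

# Route (b): the second model consumes the pretrained model's outputs
# as its input features.
def second_model(x, w):
    feats = source_model(x)
    return sum(wi * fi for wi, fi in zip(w, feats))

# One gradient-descent step (squared error) on a toy target-system sample.
x, y, lr = [1.0, 2.0], 5.0, 0.05
err = second_model(x, target_weights) - y
target_weights = [w - lr * 2 * err * f
                  for w, f in zip(target_weights, source_model(x))]
```

After this single step the prediction on the toy sample moves toward the target value while the pretrained weights themselves are left untouched, which is the essence of using the source model as a fixed feature extractor.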
“…We stated our position earlier that networks pre-trained on ImageNet might not provide better feature representations, a view supported by the research community. Veronika Cheplygina [44] conducted a study to provide a stance on whether ImageNet pre-training is useful in medical imaging studies. The conclusion of that study was “it depends”: if the volume of data is small, it is better to use a pre-trained network rather than initializing the weights randomly; however, if there is enough data, the network should be trained on medical images from scratch.…”
Section: Proposed Methods
confidence: 99%
“…The second group consists of studies wherein some of the early layers of the model pre-trained on a large-scale natural image dataset were frozen, their weights kept unchanged, while the final layers were fine-tuned [29]. This practice is based on the fact that the early-layer features are more generic (e.g., edges), whereas the later-layer features are more specific to a particular task or dataset [17].…”
Section: Transfer Learning
confidence: 99%
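The freeze-and-fine-tune pattern in the excerpt above can be sketched in plain Python (weights and data are made-up illustrations, not from any cited paper): the early layer stays frozen while only the final layer's weights receive gradient updates on target-domain data.

```python
# "Early" layer: generic features, frozen during fine-tuning.
frozen_w1 = [[0.5, -0.2], [0.1, 0.4]]
# "Final" layer: task-specific, the only weights that get updated.
w2 = [0.3, -0.1]

def layer1(x):
    # Frozen feature layer (stand-in for pretrained early conv layers).
    return [sum(w * xi for w, xi in zip(row, x)) for row in frozen_w1]

def forward(x, w2):
    h = layer1(x)
    return sum(w * hi for w, hi in zip(w2, h))

# Fine-tune only w2 on one target-domain sample; frozen_w1 never changes.
x, y, lr = [1.0, 2.0], 1.0, 0.1
for _ in range(20):
    h = layer1(x)
    err = forward(x, w2) - y
    w2 = [w - lr * 2 * err * hi for w, hi in zip(w2, h)]
```

In a deep-learning framework the same effect is achieved by disabling gradient computation for the early layers (e.g., setting their parameters as non-trainable) so the optimiser only updates the final layers.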