2020
DOI: 10.1007/978-3-030-60548-3_12

First U-Net Layers Contain More Domain Specific Information Than the Last Ones

Cited by 22 publications (30 citation statements)
References 17 publications
“…The second most popular strategy to apply transfer learning was fine-tuning certain parameters in a pretrained CNN [34, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146]. The remaining approaches first optimized a feature extractor (typically a CNN or an SVM), and then trained a separate model (SVMs [30, 45, 147, 148, 149], long short-term memory networks [150, 151], clustering methods [148, 152], random forests [70, 153], multilayer perceptrons [154], logistic regression [148], elastic net [155], CNNs [156]).…”
Section: Results (mentioning)
confidence: 99%
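A minimal PyTorch/scikit-learn sketch of the two strategies this survey describes: freezing a pretrained CNN and fine-tuning only a new last layer, versus using the frozen CNN as a feature extractor for a separate model (here an SVM). The choice of ResNet-18, the two-class task, and the random stand-in data are illustrative assumptions, not details from the surveyed papers.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Strategy 1: fine-tune only certain parameters of a pretrained CNN.
cnn = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in cnn.parameters():
    p.requires_grad = False                     # freeze everything ...
cnn.fc = nn.Linear(cnn.fc.in_features, 2)       # ... except a new last layer
optimizer = torch.optim.Adam(cnn.fc.parameters(), lr=1e-3)

# Strategy 2: treat the frozen CNN as a feature extractor and train a
# separate model (an SVM) on the extracted features.
extractor = nn.Sequential(*list(cnn.children())[:-1])  # drop the head
with torch.no_grad():
    x = torch.randn(8, 3, 224, 224)             # stand-in batch of images
    feats = extractor(x).flatten(1).numpy()     # (8, 512) feature vectors
labels = [0, 1] * 4                             # stand-in targets
svm = SVC().fit(feats, labels)
```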
“…Besides, as adapting pretrained CNNs to the target domain data requires, at least, replacing the last layer of these models, researchers have likely turned fine-tuning only this randomly-initialized layer into common practice, although we found no empirical studies that supported such practice. Four surveyed articles studied different fine-tuning strategies with CNNs pretrained on ImageNet [96, 134] and medical images [129, 130]. The approaches that utilized ImageNet-pretrained CNNs [96, 134] reported that fine-tuning more layers led to higher accuracy, suggesting that the first layers of ImageNet-pretrained networks (which detect low-level image characteristics, such as corners and borders) may not be adequate for medical images.…”
Section: Discussion (mentioning)
confidence: 99%
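The depth-dependent fine-tuning these studies compared can be sketched as follows: freeze an ImageNet-pretrained network, then re-enable gradients only for the new head plus the last few stages, varying how many stages are unfrozen. ResNet-18 and the helper name are illustrative assumptions.

```python
import torch.nn as nn
from torchvision import models

def freeze_all_but_last(model: nn.Module, n_blocks: int) -> nn.Module:
    """Freeze the whole network, then unfreeze the new head plus the
    last `n_blocks` residual stages."""
    for p in model.parameters():
        p.requires_grad = False
    stages = [model.layer1, model.layer2, model.layer3, model.layer4]
    for stage in [model.fc] + stages[len(stages) - n_blocks:]:
        for p in stage.parameters():
            p.requires_grad = True
    return model

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # new task-specific layer
model = freeze_all_but_last(model, n_blocks=2)  # tunes layer3, layer4, fc
```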
“…It is assumed that low-level features should be shared across domains, while high-level ones are more prone to domain shift and should therefore be fine-tuned. However, a number of Domain Adaptation papers demonstrate the presence of low-level domain shift [3,12,21].…”
Section: Introduction (mentioning)
confidence: 99%
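The paper's titular finding suggests inverting the usual recipe for segmentation under low-level domain shift: fine-tune the first U-Net layers rather than the last ones. A minimal sketch, assuming a PyTorch U-Net whose earliest encoder parameters can be selected by name prefix; the prefixes below are hypothetical and depend on the concrete implementation.

```python
import torch.nn as nn

# Hypothetical parameter-name prefixes for the first encoder layers.
FIRST_BLOCK_PREFIXES = ("encoder.conv1", "encoder.block1")

def finetune_first_layers(unet: nn.Module) -> list:
    """Freeze the U-Net, then unfreeze only its earliest encoder layers;
    returns the trainable parameters to hand to the optimizer."""
    trainable = []
    for name, p in unet.named_parameters():
        p.requires_grad = name.startswith(FIRST_BLOCK_PREFIXES)
        if p.requires_grad:
            trainable.append(p)
    return trainable
```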