2021
DOI: 10.1016/j.knosys.2021.107216

Deep transfer learning for conditional shift in regression

Cited by 36 publications (9 citation statements) · References 28 publications
“…, MMD measures the discrepancy between the mean embeddings using a Hilbert–Schmidt norm as $D_{\mathrm{MMD}}(\mathcal{D}_p, \mathcal{D}_q) = \|\mu_{X_p} - \mu_{X_q}\|_{HS}^2$. Inspired by the MMD, Liu et al. [20] recently proposed the Conditional Embedding Operator Discrepancy (CEOD) for measuring the divergence between conditional distributions. The CEOD is based on empirical conditional embedding operators and is constructed as…”
Section: Conditional Embedding Operator Discrepancy (CEOD)
confidence: 99%
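The excerpt above compares distributions via the distance between their kernel mean embeddings. As a toy illustration only (not the paper's implementation, which targets conditional embedding operators), the squared MMD between two samples can be estimated with an RBF kernel; the kernel functions and the `gamma` bandwidth below are assumptions for the sketch:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF (Gaussian) kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    # Biased (V-statistic) estimate of squared MMD, i.e. the squared RKHS
    # distance between the two empirical mean embeddings; always >= 0.
    Kxx = rbf_kernel(X, X, gamma)
    Kyy = rbf_kernel(Y, Y, gamma)
    Kxy = rbf_kernel(X, Y, gamma)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
# Two samples from the same distribution vs. two with shifted means.
same = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
diff = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2)))
```

A matched pair of samples yields a small `mmd2`, while a mean-shifted pair yields a clearly larger one, which is the behaviour a discrepancy-based transfer loss exploits.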
“…Hence, certain task-specific layers of the network are fine-tuned while others remain frozen with constant parameters. In computer vision it is widely accepted that the convolutional layers are general, while fully-connected layers are task-specific [20, 35]. Adapting this concept in DNNs, we propose fine-tuning the fully-connected network of the branch CNN {f l b 1 , .…”
Section: Target DeepONet Fine-tuning
confidence: 99%
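The freeze-and-fine-tune idea described above can be sketched framework-agnostically: mark the "general" feature layers as frozen and let a gradient step update only the trainable head. The layer names and the single-step SGD update below are illustrative assumptions, not the cited DeepONet code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer model: a general feature layer (frozen) and a
# task-specific head (fine-tuned), mirroring the frozen-convolutional /
# adapted-fully-connected split described in the excerpt.
layers = {
    "features": {"W": rng.normal(size=(4, 8)), "trainable": False},
    "head":     {"W": rng.normal(size=(8, 1)), "trainable": True},
}

def sgd_step(layers, grads, lr=0.1):
    """Apply a gradient step only to layers marked trainable (non-frozen)."""
    for name, layer in layers.items():
        if layer["trainable"]:
            layer["W"] -= lr * grads[name]

before = {name: layer["W"].copy() for name, layer in layers.items()}
grads = {name: np.ones_like(layer["W"]) for name, layer in layers.items()}
sgd_step(layers, grads)

frozen_unchanged = np.allclose(layers["features"]["W"], before["features"])
head_updated = not np.allclose(layers["head"]["W"], before["head"])
```

After the step, the frozen feature weights are untouched while the head weights have moved; in a real framework the same effect is typically achieved with a per-parameter trainable flag.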
“…The application of deep learning methods to plant classification has performed extremely well, and their strong generalisation performance has made them central to solving large-scale plant leaf classification problems. However, a disadvantage of deep learning methods is that they need sufficient supervised learning samples during the training phase [9], because in most cases we only have a small number of training samples of a certain object to be recognized [10–12]. A general deep convolutional neural network performs extremely poorly with few training samples, because the network under-fits during training when samples are insufficient [13].…”
Section: Introduction
confidence: 99%