2020
DOI: 10.1016/j.knosys.2020.106258
Adversarial transfer learning for cross-domain visual recognition

Cited by 13 publications (5 citation statements)
References 35 publications
“…Transfer learning, whose core is to assist the learning process in the target domain by exploiting the similarity between the source and target domains, has shown great effectiveness in many fields [54][55][56]. Transfer learning provides a fruitful approach for learning from existing knowledge in the original domain and applying it to a new domain based on the similarity of the datasets.…”
Section: Discussion (mentioning)
confidence: 99%
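To make the idea in this statement concrete, the sketch below shows one common transfer-learning recipe: reuse a backbone pretrained on a large source dataset and retrain only a small head on the target domain. This is an illustration, not code from the cited paper; the ResNet-18 backbone, the 10-class target task, and the random batch are placeholders, and it assumes a recent PyTorch/torchvision install.

```python
# Minimal transfer-learning sketch (illustrative, not the cited paper's method):
# reuse an ImageNet-pretrained backbone as source-domain knowledge and
# fine-tune only a new classification head on the target domain.
import torch
import torch.nn as nn
from torchvision import models

NUM_TARGET_CLASSES = 10  # placeholder for the target-domain label set

# Source-domain knowledge: weights learned on ImageNet.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the transferred features; only the new head will be trained.
for p in backbone.parameters():
    p.requires_grad = False

# Replace the final layer with a target-domain classifier.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_TARGET_CLASSES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a (hypothetical) batch of target-domain images.
images = torch.randn(8, 3, 224, 224)                  # stand-in for a real target batch
labels = torch.randint(0, NUM_TARGET_CLASSES, (8,))

logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```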
“…Deepharmony (Dewey et al. 2019) uses paired data to change the contrast of MRI from one scanner to another with a modified U-Net. Generative Adversarial Networks (Huang et al. 2018; Sankaranarayanan et al. 2018; Lei et al. 2019; Wang, Zhang, and Fu 2020) aim to generate new images to overcome the domain shift. These methods modify the intensities of each pixel before training on the main task.…”
Section: Related Work (mentioning)
confidence: 99%
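The general mechanism described here, adversarially translating images so the target network never sees the domain gap, can be sketched as below. This is not code from any of the cited works; the tiny generator and discriminator are placeholders for the U-Net / GAN architectures they use, and the random tensors stand in for real source- and target-domain batches.

```python
# Minimal adversarial image-translation sketch (illustrative only): a generator
# remaps source-domain pixel intensities so a discriminator cannot tell the
# translated images from real target-domain images.
import torch
import torch.nn as nn

generator = nn.Sequential(          # source image -> target-styled image
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
)
discriminator = nn.Sequential(      # image -> logit for "real target image"
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 16 * 16, 1),
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

source = torch.randn(4, 3, 32, 32)  # stand-ins for real source/target batches
target = torch.randn(4, 3, 32, 32)

# Discriminator step: real target images -> 1, translated source images -> 0.
fake = generator(source).detach()
d_loss = bce(discriminator(target), torch.ones(4, 1)) + \
         bce(discriminator(fake), torch.zeros(4, 1))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: fool the discriminator into scoring translated images as target.
g_loss = bce(discriminator(generator(source)), torch.ones(4, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```

The translated images would then be fed to the main-task model in place of the raw source images, which is the "modify the intensities of each pixel before training" step the statement refers to.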
“…The main difficulty of this task is that machine learning techniques rely on recognizing massive numbers of words, yet the "cold environment" of poetry (the lack of a learning corpus) makes it hard to begin the learning task. To this end, transfer learning has emerged (Wang, Zhang, & Fu, 2020), which aims to migrate labeled data or knowledge structures from relevant domains to complete the learning task in the target domain; a typical example is Google's BERT model (Devlin, Chang, Lee, & Toutanova, 2018). Building on it, the BERT-BiLSTM-CRFs model has been used effectively for large-scale entity recognition in recent years (Song, Tian, & Yu, 2020).…”
Section: Introduction (mentioning)
confidence: 99%
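As a loose illustration of that transfer recipe, the sketch below fine-tunes a pretrained BERT encoder for token-level entity recognition with the Hugging Face transformers library. It is not the BERT-BiLSTM-CRFs model itself (which adds a BiLSTM encoder and a CRF decoder on top of BERT); the checkpoint name, tag set, and training sentence are placeholders.

```python
# Sketch of transfer learning for entity recognition (illustrative): reuse a
# pretrained BERT encoder and fine-tune it with a plain token-classification head.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

LABELS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]   # placeholder tag set

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# One training step on a (hypothetical) annotated sentence.
encoding = tokenizer("the moon shines before my bed", return_tensors="pt")
labels = torch.zeros_like(encoding["input_ids"])      # stand-in gold tags, all "O"

outputs = model(**encoding, labels=labels)            # BERT encoder + linear tag head
outputs.loss.backward()
optimizer.step()
```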