2022
DOI: 10.3390/s22041416
Transfer Learning for Radio Frequency Machine Learning: A Taxonomy and Survey

Abstract: Transfer learning is a pervasive technology in computer vision and natural language processing fields, yielding exponential performance improvements by leveraging prior knowledge gained from data with different distributions. However, while recent works seek to mature machine learning and deep learning techniques in applications related to wireless communications, a field loosely termed radio frequency machine learning, few have demonstrated the use of transfer learning techniques for yielding performance gain…

Cited by 27 publications (17 citation statements) · References 42 publications
“…Depending on assumptions between the source and target, the TL application can be categorized as homogeneous, where differences exist in the distributions between source and target, or heterogeneous, where the differences are in the feature space of the problem [13], [14]. A valuable discussion for understanding the concepts of homogeneous and heterogeneous is provided by Wong and Michaels [15], who explain the change in distributions as a change in the dataset's collected/generated domain, while the feature space of the problem can be associated with the intended task the source model is trained on and can be contrasted with the task of the target problem. The two most common types of TL are retraining the classification head, where early layers are frozen during training to preserve feature extraction, and fine-tuning the whole model [16].…”
Section: Transfer Learning
confidence: 99%
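The head-retraining strategy described in this statement (early layers frozen to preserve feature extraction, only the classifier retrained on the target task) can be sketched with a toy NumPy model. The layer sizes, synthetic data, and training loop below are illustrative assumptions, not taken from the surveyed work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "source" model: a frozen feature extractor
# (one linear layer + ReLU) followed by a freshly initialized
# classification head that is trained on the target task.
W_feat = rng.normal(size=(8, 4))        # pretrained feature weights (never updated)
W_head = rng.normal(size=(4, 2)) * 0.1  # new head, trained below

def features(x):
    # Frozen feature extraction: W_feat is treated as constant.
    return np.maximum(x @ W_feat, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy target-task data: label depends on the first input dimension.
X = rng.normal(size=(64, 8))
y = (X[:, 0] > 0).astype(int)
Y = np.eye(2)[y]

# Retrain only the head with gradient descent on cross-entropy;
# this mirrors the "frozen early layers" form of TL.
for _ in range(200):
    F = features(X)
    P = softmax(F @ W_head)
    W_head -= 0.5 * (F.T @ (P - Y) / len(X))

acc = (softmax(features(X) @ W_head).argmax(axis=1) == y).mean()
```

Whole-model fine-tuning, the second strategy named in the quote, would simply include `W_feat` in the update step as well.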
“…The two most common types of TL are retraining the classification head, where early layers are frozen during training to preserve feature extraction, and fine-tuning the whole model [16]. In this work, TL is applied in a homogeneous problem space where the underlying distributions of the data vary but the generalized problem space is the same between datasets, termed Domain Adaptation in [15] and, more specifically, an Environment Platform Co-Adaptation. Additionally, wherever retraining is performed in this work, the fine-tuning approach is used, allowing for adjustments to the feature space that might not be observable in the source dataset.…”
Section: Transfer Learning
confidence: 99%
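The homogeneous setting this statement describes (same task, shifted data distribution) can be illustrated with a minimal domain-adaptation sketch: a model is trained on a source distribution and then fine-tuned end-to-end on a small sample from a shifted target distribution. The distributions, the shift, and the logistic model are invented for illustration and are not from the cited work:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def with_bias(X):
    # Append a constant column so the boundary can move with the domain.
    return np.hstack([X, np.ones((len(X), 1))])

def train(w, X, y, steps=300, lr=0.5):
    # Plain gradient descent on logistic cross-entropy.
    for _ in range(steps):
        w = w - lr * (X.T @ (sigmoid(X @ w) - y) / len(X))
    return w

# Source domain: plentiful data, inputs centered at the origin.
Xs = with_bias(rng.normal(size=(500, 3)))
ys = (Xs[:, 0] > 0).astype(float)

# Target domain: same underlying task, but the input distribution
# is shifted -- a homogeneous TL problem with only 40 labelled samples.
Xt = with_bias(rng.normal(size=(40, 3)) + np.array([1.5, -1.0, 0.5]))
yt = (Xt[:, 0] > 1.5).astype(float)

w_src = train(np.zeros(4), Xs, ys)        # source-only model
w_ft  = train(w_src.copy(), Xt, yt)       # fine-tune the whole model on target

acc_src = ((sigmoid(Xt @ w_src) > 0.5) == yt).mean()
acc_ft  = ((sigmoid(Xt @ w_ft)  > 0.5) == yt).mean()
```

Starting the target training from `w_src` rather than from scratch is what makes this transfer learning; on this toy shift, the fine-tuned model recovers accuracy the source-only model loses.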
“…Such algorithms that utilize raw radio frequency (RF) data as input to ML/DL techniques are considered radio frequency machine learning (RFML) algorithms [1], [2]. Like all traditional ML techniques, most state-of-the-art RFML algorithms require copious amounts of labelled training data drawn from the intended deployment environment, and require that environment to remain stable, in order to achieve said state-of-the-art performance [3]. Therefore, recent works have identified transfer learning (TL) as a key research thrust for RFML that would enable developers to train high-performing RFML models quickly and with less labelled data than standard training practices require, by using prior knowledge learned on a source domain/task for a target domain/task [3], [4].…”
Section: Introduction
confidence: 99%