2020 IEEE International Radar Conference (RADAR)
DOI: 10.1109/radar42522.2020.9114643

Transfer Learning from Audio Deep Learning Models for Micro-Doppler Activity Recognition

Abstract: This paper presents a mechanism to transform radio micro-Doppler signatures into a pseudo-audio representation, which results in significant improvements in transfer learning from a deep learning model trained on audio. We also demonstrate that transfer learning from a deep learning model trained on audio is more effective than transfer learning from a model trained on images, which suggests that machine learning methods used to analyse audio can be leveraged for micro-Doppler. Finally, we utilise an occlusion method…
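
The citing works quoted below indicate that the paper's transformation is learned (a GAN-based image-to-image translation), but as a rough, hypothetical sketch of what a pseudo-audio input to an audio network looks like, the following maps a radar return, treated as a 1-D time series, onto the 96x64 log-mel patches consumed by audio models such as VGGish. The sampling rate and mel parameters are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: turn a 1-D radar return into a log-mel
# "pseudo-audio" patch shaped like the 96x64 inputs of audio CNNs
# such as VGGish. Sampling rate and mel parameters are assumptions.
import torch
import torchaudio

SAMPLE_RATE = 16_000  # assumed: radar return resampled to audio rates
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE,
    n_fft=400,        # 25 ms analysis window at 16 kHz
    hop_length=160,   # 10 ms hop
    n_mels=64,        # VGGish-style 64 mel bands
)

def pseudo_audio_patch(radar_return: torch.Tensor) -> torch.Tensor:
    """Map a 1-D radar return (>= ~1 s of samples) to a (96, 64) log-mel patch."""
    spec = mel(radar_return)            # (64, time_frames)
    log_spec = torch.log(spec + 1e-6)   # log compression, as in audio pipelines
    return log_spec[:, :96].T           # crop and transpose to (frames, mels)
```

A patch produced this way can be batched as a (B, 1, 96, 64) tensor and fed to any audio-pretrained backbone, which is the interface the sketches below assume.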

Cited by 11 publications (6 citation statements)
References: 14 publications
“…In [8], a ResNet-based CNN pre-trained on the ImageNet database is fine-tuned on time-Doppler spectrograms. The authors of [30] proposed a generative-adversarial image-to-image translation approach to transform time-Doppler signatures into a pseudo-audio representation, and fine-tuned a pre-trained VGGish CNN to classify the obtained representations.…”
Section: Related Work (mentioning, confidence: 99%)
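
As a hedged sketch of the fine-tuning step this quote describes, the following builds a VGGish-style backbone (layer sizes follow the published VGGish architecture), indicates where pretrained weights would be loaded, and attaches a new classification head. The checkpoint path and the six-class head are assumptions, not the authors' code.

```python
# Sketch, not the authors' implementation: VGGish-style backbone plus a
# new task head for micro-Doppler activity classes.
import torch
import torch.nn as nn

class VGGishLike(nn.Module):
    """VGGish-style conv stack; layer sizes follow the published VGGish."""
    def __init__(self):
        super().__init__()
        def block(cin, cout, n):
            layers = []
            for _ in range(n):
                layers += [nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True)]
                cin = cout
            return layers + [nn.MaxPool2d(2)]
        self.features = nn.Sequential(
            *block(1, 64, 1), *block(64, 128, 1),
            *block(128, 256, 2), *block(256, 512, 2),
        )
        self.embed = nn.Sequential(           # (B, 512, 6, 4) -> 128-d embedding
            nn.Flatten(), nn.Linear(512 * 6 * 4, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Linear(4096, 128),
        )

    def forward(self, x):                     # x: (B, 1, 96, 64) log-mel patch
        return self.embed(self.features(x))

backbone = VGGishLike()
# In practice, pretrained audio weights would be loaded here, e.g.:
# backbone.load_state_dict(torch.load("vggish.pt"))  # hypothetical checkpoint
head = nn.Linear(128, 6)   # new head; six activity classes is an assumption
model = nn.Sequential(backbone, head)
```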
“…While relatively simple CNNs consisting of only three to five convolutional layers have been shown to be successful, deeper and more complex network approaches have also been investigated [26]. Furthermore, more advanced deep learning approaches have been taken to compensate for limited training data, such as generating data with generative adversarial networks and transfer learning [22,30].…”
Section: Related Work (mentioning, confidence: 99%)
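
A minimal sketch of the "three to five convolutional layers" baseline this quote refers to, classifying time-Doppler spectrograms directly; the 128x128 input size and six activity classes are assumptions.

```python
# Small three-conv-layer CNN over single-channel spectrograms.
# Input (1, 128, 128) and six output classes are illustrative.
import torch.nn as nn

simple_cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 64x64
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 32x32
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 16x16
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 6),   # six activity classes (assumed)
)
```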
“…While we observe that the specific model we used outperforms on this data set and generalizes well, this is not necessarily the case for larger networks or different data sets. For example, in [30], the authors find that an SVM on the spectrogram outperforms much deeper networks like VGG-16. Additionally, unlike CNNs, features defined on a specific manifold through the kernel function provide a clear advantage in being able to more precisely obtain invariance to well-defined nuisance transformations known a priori, based on knowledge of the application and data.…”
Section: Related Work (mentioning, confidence: 99%)
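
The SVM baseline this quote mentions might look like the following scikit-learn sketch, trained directly on flattened spectrogram pixels; the data shapes and the RBF kernel choice are assumptions, not details from [30].

```python
# Sketch of an SVM baseline on flattened spectrograms.
# X: (n_samples, n_freq_bins * n_time_bins) float array, y: activity labels.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# Typical use: svm.fit(X_train, y_train); acc = svm.score(X_test, y_test)
```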
“…At its simplest, the first n convolutional layers and their weights from the feature extraction part of an existing model are copied to the first n layers of a new model for a related or similar task, with the remaining layers either re‐initialized with randomized weights or replaced (e.g., Razavian et al., 2014; Yosinski et al., 2014). These tasks need not be near‐identical or even superficially related, as long as low‐level data characteristics are shared between tasks (e.g., Efremova et al., 2019; Tran et al., 2020; Zamir et al., 2018). The intuition is that generalized knowledge of data structure and properties from one model trained with abundant labeled data (or “big data”) can guide a learning algorithm toward a good solution for a new task with far more limited, or even no, labeled data.…”
Section: Introduction (mentioning, confidence: 99%)
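
The layer-copying scheme this quote describes can be sketched as follows, using torchvision's VGG16 purely as an example donor network; the cut point n and the six-class head are assumptions.

```python
# Sketch: copy the first n modules (layers and weights) of a pretrained
# feature extractor into a fresh model, leave the rest randomly
# initialized, and optionally freeze the transferred layers.
import copy
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

donor = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)  # trained on "big data"
student = vgg16(weights=None)                       # same architecture, random weights

n = 10  # assumed cut point within the feature extractor
for i in range(n):
    student.features[i] = copy.deepcopy(donor.features[i])

# Freeze the transferred layers so only the later, task-specific layers train:
for i in range(n):
    for p in student.features[i].parameters():
        p.requires_grad = False

student.classifier[-1] = nn.Linear(4096, 6)  # new head; six classes is assumed
```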