Proceedings of the 3rd International Conference on Crowd Science and Engineering 2018
DOI: 10.1145/3265689.3265705
Deep Transfer Learning for Cross-domain Activity Recognition

Abstract: Human activity recognition plays an important role in people's daily life. However, it is often expensive and time-consuming to acquire sufficient labeled activity data. To solve this problem, transfer learning leverages the labeled samples from the source domain to annotate the target domain, which has few or no labels. Unfortunately, when there are several source domains available, it is difficult to select the right source domain for transfer. The right source domain means that it has the most similar properties…
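The abstract's key step, picking the source domain whose distribution is most similar to the target's, can be illustrated with a small distance-based selector. The sketch below is not the paper's algorithm: it uses maximum mean discrepancy (MMD) as one plausible similarity measure, and the arrays, domain names, and `gamma` bandwidth are all synthetic placeholders.

```python
# Illustrative sketch of similarity-based source-domain selection,
# NOT the cited paper's exact method. Each domain is assumed to be a
# NumPy array of shape (n_samples, n_features) of activity features.
import numpy as np

def rbf_kernel(x, y, gamma=0.1):
    """RBF kernel matrix between rows of x and rows of y."""
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd(source, target, gamma=0.1):
    """(Biased) squared maximum mean discrepancy between two sample sets."""
    return (rbf_kernel(source, source, gamma).mean()
            + rbf_kernel(target, target, gamma).mean()
            - 2 * rbf_kernel(source, target, gamma).mean())

def select_source_domain(candidates, target, gamma=0.1):
    """Pick the candidate source domain whose feature distribution
    is closest to the target's (smallest MMD)."""
    return min(candidates, key=lambda name: mmd(candidates[name], target, gamma))

# Toy example: three candidate source domains with increasing shift.
rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, (100, 8))
candidates = {
    "wrist": rng.normal(0.1, 1.0, (100, 8)),
    "waist": rng.normal(2.0, 1.0, (100, 8)),
    "ankle": rng.normal(5.0, 1.0, (100, 8)),
}
print(select_source_domain(candidates, target))  # likely selects "wrist"
```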

Cited by 118 publications (76 citation statements) · References 40 publications
“…We also demonstrate that the learned representations from a different (but related) unlabeled data source can be successfully transferred to improve the performance of diverse tasks, even in the case of semi-supervised learning. In terms of transfer learning, our approach also differs significantly from some earlier attempts [44,69] that were concerned with feature transferability from a fully-supervised model learned from inertial measurement unit data, as our approach utilizes widely available smartphones and does not require labeled data. Finally, the proposed technique is also different from previously studied unsupervised pre-training methods such as autoencoders [37], restricted Boltzmann machines [55], and sparse coding [9], as we employ an end-to-end (self-)supervised learning paradigm on multiple surrogate tasks to extract features.…”
Section: Determining Representational Similarity
confidence: 99%
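The self-supervised scheme this citation describes, learning features from unlabeled sensor data via surrogate tasks, can be sketched roughly as follows. This is an illustrative approximation, not the cited authors' code: the transformation set, the network shapes, and the random stand-in data are all assumptions.

```python
# Hedged sketch of a "transformation recognition" surrogate task on
# unlabeled 3-axis accelerometer windows; architecture is illustrative.
import torch
import torch.nn as nn

def transform(window, kind):
    """Apply one of three signal transformations; `kind` becomes the label."""
    if kind == 0:                          # additive Gaussian noise
        return window + 0.05 * torch.randn_like(window)
    if kind == 1:                          # amplitude scaling
        return window * 1.5
    return torch.flip(window, dims=[-1])   # time reversal

encoder = nn.Sequential(                   # shared feature extractor
    nn.Conv1d(3, 16, kernel_size=5), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
)
head = nn.Linear(16, 3)                    # surrogate-task classifier
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    raw = torch.randn(32, 3, 128)          # stand-in for real unlabeled windows
    labels = torch.randint(0, 3, (32,))
    batch = torch.stack([transform(w, int(k)) for w, k in zip(raw, labels)])
    loss = loss_fn(head(encoder(batch)), labels)
    opt.zero_grad(); loss.backward(); opt.step()
# `encoder` can then be reused (frozen or fine-tuned) for activity recognition.
```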
“…Recent studies have indicated that deep networks can learn more transferable features for domain adaptation [12], [13]. The latest advances have been achieved by embedding domain adaptation modules in the pipeline of deep feature learning to extract domain-invariant representations [14], [15], [16], [17], [18], [19].…”
Section: Introduction
confidence: 99%
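A well-known example of the kind of embedded domain adaptation module this citation refers to is a gradient reversal layer in the style of DANN. The sketch below is a generic illustration, not the implementation from any specific cited reference; the layer sizes and the `lam` weighting are arbitrary choices.

```python
# Hedged sketch of a gradient reversal layer (DANN-style), one common way
# to embed domain adaptation into deep feature learning.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on the
    backward pass, so the feature extractor is trained to confuse the
    domain classifier, encouraging domain-invariant representations."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

features = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # shared extractor
label_head = nn.Linear(32, 6)    # activity classes (supervised on source)
domain_head = nn.Linear(32, 2)   # source-vs-target discriminator

x, lam = torch.randn(8, 64), 0.5
f = features(x)
activity_logits = label_head(f)                          # normal gradients
domain_logits = domain_head(GradReverse.apply(f, lam))   # reversed gradients
```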
“…In the work of Wang et al [33], the authors developed a DA method for different scenarios: adaptation between similar body parts on the same person, different body parts on the same person, and similar body parts on different people. Their method was evaluated on public datasets such as PAMAP2 and OPPORTUNITY against six common alternatives, performing better on average.…”
Section: Related Work, A. Domain Adaptation
confidence: 99%
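The cross-position setting this citation describes can be made concrete with a toy evaluation harness: train on features from one body position, test on another, and compare against the same-position baseline. Everything below is synthetic; it does not load PAMAP2 or OPPORTUNITY, and the position names are placeholders.

```python
# Hedged sketch of cross-position evaluation for activity recognition.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def fake_position_data(shift, n=200, d=10, classes=4):
    """Synthetic stand-in: same labels, position-dependent feature shift."""
    y = rng.integers(0, classes, n)
    X = rng.normal(0, 1, (n, d)) + y[:, None] + shift
    return X, y

X_wrist, y_wrist = fake_position_data(shift=0.0)   # "source" position
X_ankle, y_ankle = fake_position_data(shift=1.5)   # "target" position

clf = LogisticRegression(max_iter=1000).fit(X_wrist, y_wrist)
print("same-position:", accuracy_score(y_wrist, clf.predict(X_wrist)))
print("cross-position:", accuracy_score(y_ankle, clf.predict(X_ankle)))
# The gap between the two scores is what domain adaptation tries to close.
```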