Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2011
DOI: 10.1145/2020408.2020520

Multi-source domain adaptation and its application to early detection of fatigue

Abstract: We consider the characterization of muscle fatigue through noninvasive sensing mechanisms such as surface electromyography (SEMG). While changes in the properties of SEMG signals with respect to muscle fatigue have been reported in the literature, the large variation in these signals across different individuals makes the task of modeling and classification of SEMG signals challenging. Indeed, the variation in SEMG parameters from subject to subject creates differences in the data distribution. In this paper, w…

Cited by 33 publications (54 citation statements) | References 19 publications
“…Another area of inconsistency in the literature is the characterization of the transfer learning process with respect to the availability of labeled and unlabeled data. For example, Daumé [22] and Chattopadhyay [14] define supervised transfer learning as the case of having abundant labeled source data and limited labeled target data, and semi-supervised transfer learning as the case of abundant labeled source data and no labeled target data. In Gong [42] and Blitzer [5], by contrast, semi-supervised transfer learning is the case of having abundant labeled source data and limited labeled target data, and unsupervised transfer learning is the case of abundant labeled source data and no labeled target data.…”
Section: Definitions Of Transfer Learning
confidence: 99%
“…The paper by Chattopadhyay [14] proposes two separate solutions, both using multiple labeled source domains. The first is the conditional-probability-based multi-source domain adaptation (CP-MDA) approach, a domain adaptation process that corrects the conditional distribution differences between the source and target domains.…”
Section: Instance-based Transfer Learning
confidence: 99%
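
A minimal sketch of the CP-MDA idea quoted above, assuming Python with NumPy and scikit-learn. This is not the authors' implementation: here each per-subject source classifier is weighted by how well its conditional predictions P(y|x) agree with a handful of labeled target samples, a simple proxy for conditional distribution similarity; all function names and the weighting rule are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def cpmda_weights(source_models, X_tgt, y_tgt):
    """Weight each source by agreement of its P(y|x) with labeled target data."""
    weights = []
    for model in source_models:
        proba = model.predict_proba(X_tgt)                    # estimated P(y|x)
        weights.append(proba[np.arange(len(y_tgt)), y_tgt].mean())
    weights = np.asarray(weights)
    return weights / weights.sum()                            # normalize to sum to 1

def predict_weighted(source_models, weights, X):
    """Combine source posteriors with the learned source weights."""
    combined = sum(w * m.predict_proba(X) for w, m in zip(weights, source_models))
    return combined.argmax(axis=1)

# Toy usage: three synthetic "subjects" as source domains, few labeled target samples.
rng = np.random.default_rng(0)
sources = []
for shift in (0.0, 0.5, 2.0):
    X = rng.normal(shift, 1.0, size=(200, 4))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    sources.append(LogisticRegression().fit(X, y))
X_t = rng.normal(0.2, 1.0, size=(10, 4))
y_t = (X_t[:, 0] + X_t[:, 1] > 0.4).astype(int)
w = cpmda_weights(sources, X_t, y_t)
print("source weights:", np.round(w, 3))
print("target predictions:", predict_weighted(sources, w, X_t))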
“…Two recently proposed MSTL methods, MDA [2] and LWE [9], both weight each source domain based on the smoothness assumption: a source gains a high weight if its predictions are smooth among data samples that are close in the feature space. MDA [2] computes a single weight for each source, while LWE [9] computes a weight for each source on each sample.…”
Section: Imbalanced Distributions
confidence: 99%
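
A minimal sketch of the smoothness-based weighting contrasted above, assuming Python with NumPy and scikit-learn. The agreement score below (nearby unlabeled target samples receiving the same predicted label) is an illustrative reading, not the exact formulation of MDA [2] or LWE [9].

import numpy as np
from sklearn.neighbors import NearestNeighbors

def global_smoothness_weight(model, X_unlabeled, k=5):
    """MDA-style: one global weight per source, the mean k-NN label agreement."""
    preds = model.predict(X_unlabeled)
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X_unlabeled).kneighbors(X_unlabeled)
    return (preds[idx[:, 1:]] == preds[:, None]).mean()       # idx[:, 0] is the point itself

def local_smoothness_weights(models, X_unlabeled, k=5):
    """LWE-style: one weight per (source, sample) from each sample's neighborhood."""
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X_unlabeled).kneighbors(X_unlabeled)
    rows = []
    for m in models:
        p = m.predict(X_unlabeled)
        rows.append((p[idx[:, 1:]] == p[:, None]).mean(axis=1))
    return np.vstack(rows)                                    # shape (n_sources, n_samples)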
“…Thus, we should assign a low weight to domain s in Region R3. This assumption considers local weights for each source, which differs from the global weight assumption held by CRC [10], MDA [2], and GCM [8]. Although LWE [9] calculates local weights, it considers only unsupervised manifolds, which may lead to wrong predictions under negative transfer and imbalanced distributions.…”
Section: Supervised Local Weight Scheme
confidence: 99%
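
A minimal sketch of a supervised local weight scheme in the spirit of the passage above, assuming Python with NumPy and scikit-learn: each source is weighted per test point by its accuracy on the labeled target samples nearest that point, so a source that fails in some local region (such as the quoted Region R3) receives a low weight there. The k-NN notion of "region" is an assumption for illustration, not the cited method itself.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def supervised_local_weights(models, X_lab, y_lab, X_test, k=5):
    """Per-source, per-test-point weights from local accuracy on labeled target data."""
    _, idx = NearestNeighbors(n_neighbors=k).fit(X_lab).kneighbors(X_test)
    W = np.empty((len(models), len(X_test)))
    for s, m in enumerate(models):
        correct = (m.predict(X_lab) == y_lab)                 # per-labeled-sample correctness
        W[s] = correct[idx].mean(axis=1)                      # accuracy in each local region
    W += 1e-8                                                 # guard against all-zero columns
    return W / W.sum(axis=0, keepdims=True)                   # normalize over sources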
“…For example, the work in [17] extends TrAdaboost [5] by adding a wrapper boosting framework that weights each source domain. The work in [3] presents a linear combination over multiple sources to reach a consensus. However, these approaches work under a single-view setting.…”
Section: Related Work
confidence: 99%
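
A minimal sketch of the consensus-by-linear-combination idea attributed to [3] above, assuming Python with NumPy. The uniform combination coefficients are an assumption; any weighting scheme, such as those sketched earlier, could supply them instead.

import numpy as np

def consensus_predict(source_models, X, alphas=None):
    """Consensus posterior as a linear combination: sum_s alpha_s * P_s(y|x)."""
    if alphas is None:
        alphas = np.ones(len(source_models)) / len(source_models)  # uniform by default
    combined = sum(a * m.predict_proba(X) for a, m in zip(alphas, source_models))
    return combined.argmax(axis=1)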