Deep learning methods are successfully used in applications pertaining to ubiquitous computing, pervasive intelligence, health, and well-being. Specifically, the area of human activity recognition (HAR) has been transformed primarily by convolutional and recurrent neural networks, thanks to their ability to learn semantic representations directly from raw input. However, extracting generalizable features requires massive amounts of well-curated data, which are notoriously challenging to obtain due to privacy issues and annotation costs. Therefore, unsupervised representation learning (i.e., learning without manually labeling the instances) is of prime importance for leveraging the vast amount of unlabeled data produced by smart devices. In this work, we propose a novel self-supervised technique for feature learning from sensory data that does not require access to any form of semantic labels, i.e., activity classes. We learn a multi-task temporal convolutional network to recognize transformations applied to an input signal. By exploiting these transformations, we demonstrate that simple auxiliary tasks of binary classification result in a strong supervisory signal for extracting useful features for the downstream task. We extensively evaluate the proposed approach on several publicly available datasets for smartphone-based HAR in unsupervised, semi-supervised, and transfer learning settings. Our method achieves performance levels superior to or comparable with fully-supervised networks trained directly with activity labels, and it performs significantly better than unsupervised learning through autoencoders. Notably, for the semi-supervised case, the self-supervised features substantially boost the detection rate by attaining a kappa score between 0.7 and 0.8 with only 10 labeled examples per class. We obtain similarly impressive performance even if the features are transferred from a different data source. Self-supervision drastically reduces the requirement of labeled activity data, effectively narrowing the gap between supervised and unsupervised techniques for learning meaningful representations. While this paper focuses on HAR as the application domain, the proposed approach is general and could be applied to a wide variety of problems in other areas.
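To make the idea concrete, the following is a minimal, hypothetical sketch (not the authors' code) of transformation-recognition self-supervision in Python with NumPy and TensorFlow/Keras: each raw sensor window is either left untouched or altered by one example transformation, and a shared 1D convolutional trunk is trained with one binary original-vs-transformed head per transformation. The window length, the network architecture, the particular transformations (noise addition, scaling, segment permutation), and the simplified one-transformation-per-window labeling scheme are all illustrative assumptions.

```python
# Hypothetical sketch (not the authors' implementation) of multi-task
# transformation-recognition self-supervision on raw sensor windows.
import numpy as np
import tensorflow as tf

WINDOW, CHANNELS = 400, 3  # assumed window of 3-axis accelerometer samples

# --- Example signal transformations (illustrative choices) -----------------
def add_noise(x, sigma=0.1):
    # Jitter the signal with Gaussian noise.
    return x + np.random.normal(0.0, sigma, x.shape)

def scale(x, low=0.7, high=1.3):
    # Multiply each channel by a random factor.
    return x * np.random.uniform(low, high, size=(1, x.shape[1]))

def permute(x, n_segments=4):
    # Split the window into segments and shuffle their order.
    segments = np.array_split(x, n_segments, axis=0)
    np.random.shuffle(segments)
    return np.concatenate(segments, axis=0)

TRANSFORMS = [add_noise, scale, permute]

def make_self_supervised_dataset(windows):
    """Keep each window as-is or apply exactly one transformation; every task
    head receives a binary label saying whether *its* transformation was used."""
    xs, ys = [], {f.__name__: [] for f in TRANSFORMS}
    for w in windows:
        choice = np.random.randint(-1, len(TRANSFORMS))  # -1 means "original"
        xs.append(TRANSFORMS[choice](w) if choice >= 0 else w)
        for i, f in enumerate(TRANSFORMS):
            ys[f.__name__].append(1.0 if i == choice else 0.0)
    return np.stack(xs), {k: np.array(v) for k, v in ys.items()}

def build_multi_task_model():
    # Shared temporal-convolutional trunk with one sigmoid head per task.
    inputs = tf.keras.Input(shape=(WINDOW, CHANNELS))
    h = tf.keras.layers.Conv1D(32, 24, activation="relu")(inputs)
    h = tf.keras.layers.MaxPooling1D(4)(h)
    h = tf.keras.layers.Conv1D(64, 16, activation="relu")(h)
    h = tf.keras.layers.GlobalMaxPooling1D()(h)  # shared feature vector
    heads = [tf.keras.layers.Dense(1, activation="sigmoid", name=f.__name__)(h)
             for f in TRANSFORMS]
    model = tf.keras.Model(inputs, heads)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# Toy usage with random data standing in for real, unlabeled sensor windows.
raw_windows = np.random.randn(256, WINDOW, CHANNELS)
X, Y = make_self_supervised_dataset(raw_windows)
model = build_multi_task_model()
model.fit(X, Y, epochs=1, batch_size=32, verbose=0)
```

After such pre-training, the (frozen or fine-tuned) convolutional trunk would serve as the feature extractor for the downstream activity classifier trained on the few available labeled examples.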
In the area of human activity recognition (HAR), 1D convolutional and recurrent neural networks trained on raw labeled signals significantly improve the detection rate over traditional methods [20,44,68,72,73]. Despite the recent advances in the field of HAR, learning representations from a massive amount of unlabeled data still presents a significant challenge. Obtaining large, well-curated activity recognition datasets is problematic due to a number of issues. First, smartphone data are privacy sensitive, which makes it hard to collect sufficient amounts of user-activity instances in a real-life setting. Second, the annotation cost and the time it takes to generate a large volume of labeled instances are prohibitive. Finally, the diversity of devices, types of embedded ...