2020
DOI: 10.48550/arxiv.2001.07416
Preprint
Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges and Opportunities

Kaixuan Chen,
Dalin Zhang,
Lina Yao
et al.

Abstract: The vast proliferation of sensor devices and the Internet of Things enables applications of sensor-based activity recognition. However, substantial challenges can influence the performance of recognition systems in practical scenarios. Recently, as deep learning has demonstrated its effectiveness in many areas, many deep methods have been investigated to address the challenges in activity recognition. In this study, we present a survey of the state-of-the-art deep learning methods fo…

Cited by 10 publications (9 citation statements) | References 128 publications (164 reference statements)
“…We evaluate VFDS on four different datasets: the UCI Human Activity Recognition (HAR) Using Smartphones dataset (Anguita et al., 2013), the OPPORTUNITY dataset (Roggen et al., 2010), the ExtraSensory dataset (Vaizman, Ellis, and Lanckriet, 2017), and the NTU RGB+D dataset (Shahroudy et al., 2016). Although there are many other human activity recognition benchmark datasets (Chen et al., 2020), we choose the above datasets to better convey our message of achieving feature-usage efficiency and interpretability with our dynamic feature selection framework, for the following reasons. First, the UCI HAR dataset is a clean dataset with no missing values, allowing us to benchmark different methods without discrepancies in data preprocessing confounding our evaluations.…”
Section: Methods
confidence: 99%
“…While existing surveys [2, 10-13] report past work in sensor-based HAR in general, this survey focuses on algorithms for human activity recognition in smart homes and on the particular taxonomies and challenges of ambient sensors, which we develop in the next sections. Indeed, HAR in smart homes is a challenging problem because human activity is complex and varies from one resident to another.…”
Section: Key Contributions
confidence: 99%
“…Attentional processes are also crucial in the global workspace theory (GWT) paradigm (Baars, 1997; Franklin et al., 2014) for recruiting content into the workspace. In this setting, the integration of deep-learning attention models (e.g., transformer networks; Chen et al., 2021) for memory access and content retrieval is an interesting line of active research (Bengio, 2019; VanRullen & Kanai, 2021). In our framework, we focus on attention mechanisms for task orchestration, but analogous complementary attention-based methods could be investigated and integrated to enable context-dependent task-set recruitment (from LTM to WM).…”
Section: Related Work
confidence: 99%