2020
DOI: 10.1007/s11036-020-01689-y
Enhancing Representation of Deep Features for Sensor-Based Activity Recognition

Cited by 12 publications (7 citation statements)
References 29 publications
“…They proposed a CNN-based approach to extract cross-domain knowledge of human activity features, aiming to capture the dissimilarities between similar activities. Li et al [17] extracted activity features from raw training data with a CNN and enhanced the human activity features by combining them with features recovered from an inverse CNN through de-pooling, de-rectification, and de-convolution. It is worth noting that deep neural network models like CNNs eliminate the need for additional feature extraction.…”
Section: Related Work
confidence: 99%
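
For orientation, the following is a minimal PyTorch sketch of the feature-enhancement idea attributed to Li et al [17]: a forward CNN pass, an inverse pass built from de-pooling, de-rectification, and de-convolution, and a combination of the two signals. The layer sizes, kernel widths, and the concatenation used to combine them are illustrative assumptions, not the paper's exact configuration.

    # Sketch only: an encoder pass plus an "inverse CNN" pass (unpooling,
    # re-rectification, transposed convolution), with the two signals combined.
    import torch
    import torch.nn as nn

    class FeatureEnhancer(nn.Module):
        def __init__(self, in_channels=3, hidden=32):
            super().__init__()
            self.conv = nn.Conv1d(in_channels, hidden, kernel_size=5, padding=2)
            self.relu = nn.ReLU()
            self.pool = nn.MaxPool1d(2, return_indices=True)   # keep indices for de-pooling
            self.unpool = nn.MaxUnpool1d(2)                     # de-pooling
            self.deconv = nn.ConvTranspose1d(hidden, in_channels, kernel_size=5, padding=2)

        def forward(self, x):
            # Forward CNN path: convolution -> rectification -> pooling.
            f = self.relu(self.conv(x))
            p, idx = self.pool(f)
            # Inverse path: de-pooling -> de-rectification (ReLU again, as in
            # deconvnet-style visualization) -> de-convolution back to input resolution.
            r = self.unpool(p, idx, output_size=f.size())
            r = self.relu(r)
            recon = self.deconv(r)
            # Combine raw input and reconstructed signal before any downstream
            # classifier (simple concatenation here, an assumption of this sketch).
            return torch.cat([x, recon], dim=1)

    # Example: a batch of 8 windows, 3 sensor channels, 128 samples each.
    enhanced = FeatureEnhancer()(torch.randn(8, 3, 128))
    print(enhanced.shape)  # torch.Size([8, 6, 128])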
“…Wan et al [16] used a three-layer CNN structure and an LSTM for activity classification, achieving accuracies of 91.00% and 85.86%, respectively. Li et al [17] employed a three-layer CNN structure and a three-layer reverse CNN structure to extract features from motion data. Based on domain knowledge, they selected a CNN (three-layer)-LSTM (one-layer) structure as the deep learning classifier.…”
Section: Sensor-Based HAR With Base Model
confidence: 99%
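
The CNN (three-layer)-LSTM (one-layer) classifier mentioned above can be sketched as below; channel counts, kernel sizes, hidden size, and window length are assumptions chosen for illustration rather than the configuration reported in [17].

    # Sketch only: three convolutional blocks followed by a single-layer LSTM
    # and a linear classification head, operating on windowed sensor data.
    import torch
    import torch.nn as nn

    class CNNLSTM(nn.Module):
        def __init__(self, in_channels=3, n_classes=12, hidden=128):
            super().__init__()
            # Three convolutional blocks extract local motion features per window.
            self.cnn = nn.Sequential(
                nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
                nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            )
            # One LSTM layer models temporal dependencies over the CNN feature sequence.
            self.lstm = nn.LSTM(input_size=128, hidden_size=hidden, num_layers=1, batch_first=True)
            self.fc = nn.Linear(hidden, n_classes)

        def forward(self, x):                 # x: (batch, channels, time)
            feats = self.cnn(x)               # (batch, 128, time/8)
            feats = feats.transpose(1, 2)     # (batch, time/8, 128) for the LSTM
            out, _ = self.lstm(feats)
            return self.fc(out[:, -1])        # classify from the last time step

    logits = CNNLSTM()(torch.randn(8, 3, 128))   # 8 windows, 3 channels, 128 samples
    print(logits.shape)                           # torch.Size([8, 12])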
“…Generally, information fusion is divided into data-level fusion, feature-level fusion and decision-level fusion. Several references have shown that feature-level fusion achieves better performance than no fusion [32][33][34][35]. In addition, many decision-level fusion methods have been proposed, such as simple voting, majority voting, weighted majority, and the fusion score method [36,37].…”
Section: Information Fusion
confidence: 99%
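
As a rough illustration of the fusion levels discussed above, the sketch below contrasts feature-level fusion (concatenating per-sensor features before a single classifier) with decision-level fusion by majority voting; the sensor names, array shapes, and labels are hypothetical.

    # Sketch only: where the fusion happens is the point, not the classifiers.
    import numpy as np

    def feature_level_fusion(feature_sets):
        """Concatenate per-sensor feature vectors, then feed one classifier."""
        return np.concatenate(feature_sets, axis=-1)

    def majority_vote(predictions):
        """Decision-level fusion: each per-sensor classifier votes on a label."""
        votes = np.asarray(predictions)
        return np.bincount(votes).argmax()

    # Feature-level: two sensors' features become one joint representation.
    acc_feats, gyro_feats = np.random.rand(10), np.random.rand(6)
    fused = feature_level_fusion([acc_feats, gyro_feats])   # shape (16,)

    # Decision-level: three classifiers predicted labels 2, 2 and 5 -> majority is 2.
    print(fused.shape, majority_vote([2, 2, 5]))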
“…Despite the success of feature-learning approaches, the recent development of deep learning techniques brought a leap in HAR performance. Typical works evaluated on PAMAP2 include: Shaohua et al [54], who adopted a CNN framework for HAR and achieved an average accuracy of 91.00%; and Li et al [55], who validated a CNN-LSTM framework on the dataset, achieving average accuracies ranging from 96.97% to 97.37%.…”
Section: Comparison With Related Work
confidence: 99%