Comparing Handcrafted Features and Deep Neural Representations for Domain Generalization in Human Activity Recognition
2022 · DOI: 10.3390/s22197324

Abstract: Human Activity Recognition (HAR) has been studied extensively, yet current approaches are not capable of generalizing across different domains (i.e., subjects, devices, or datasets) with acceptable performance. This lack of generalization hinders the applicability of these models in real-world environments. As deep neural networks are becoming increasingly popular in recent work, there is a need for an explicit comparison between handcrafted and deep representations in Out-of-Distribution (OOD) settings. This …
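To make the "handcrafted features" side of the comparison concrete, a minimal sketch of time-domain statistical features computed over a windowed accelerometer signal is shown below. The feature set (mean, standard deviation, min, max, RMS per axis) is a common illustrative choice, not the paper's actual feature set.

```python
import numpy as np

def handcrafted_features(window):
    """Compute simple time-domain statistics per axis of a
    (samples, axes) accelerometer window. Illustrative only:
    the paper's actual handcrafted feature set is not reproduced here."""
    feats = []
    for axis in window.T:
        feats.extend([
            axis.mean(),                   # central tendency
            axis.std(),                    # spread
            axis.min(),                    # range (lower)
            axis.max(),                    # range (upper)
            np.sqrt(np.mean(axis ** 2)),   # RMS energy
        ])
    return np.array(feats)

# Example: a 2-second window sampled at 50 Hz with 3 axes
rng = np.random.default_rng(0)
window = rng.normal(size=(100, 3))
print(handcrafted_features(window).shape)  # (15,)
```

Feature vectors like this would then feed a classical classifier, whereas a deep model would consume the raw window directly.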

Cited by 14 publications (20 citation statements) · References 51 publications
“…Deep features extracted by these SSL pre-trained models are able to capture discriminative histomorphometric features. Previous research 7 has shown that deep features have achieved higher performance than handcrafted features. Leveraging the domain-specific insights provided by handcrafted features and the superior discriminative capabilities of deep features together offers a promising avenue for enhancing performance in pathology image analysis.…”
Section: Description of Purpose
confidence: 96%
“…In contrast, deep learning requires deep neural representations. It is a very arresting academic work to compare the two through a new and rational approach [ 12 ]. The work analyzes both approaches in multiple domains utilizing homogenized public datasets, verifying that even though deep learning initially outperforms handcrafted features, the situation is reversed as the distance from the training distribution increases, which supports the hypothesis that handcrafted features may generalize better across specific domains.…”
Section: Overview of the Contributions
confidence: 99%
“…Despite the irreplaceable advantages of traditional feature-based machine learning suggested in Section 2.2 , deep learning is increasingly demonstrating its powerful adaptive capabilities. Besides [ 12 ], this Special Issue contains three more articles on deep learning [ 13 , 14 , 15 ], offering us multiple dimensions of thinking: The training of HAR models requires a large amount of annotated data corpus. Most current models are not robust when facing anonymized data from new users; meanwhile, capturing each new subject’s data is usually not possible.…”
Section: Overview of the Contributions
confidence: 99%
“…Nevertheless, several limitations have been identified upon deploying deep learning models, such as the convergence to solutions that rely on spurious correlations [ 12 ]. In our previous work, Bento et al [ 13 ] compared the effectiveness of Handcrafted (HC) features versus deep neural representations for DG in HAR. Our findings revealed that while deep learning models initially outperformed those based on HC features, this trend was reversed as the distance from the training distribution increased, creating a gap between these methods in the OOD regime.…”
Section: Introduction
confidence: 99%
“…Our work attempts to bridge this gap by using regularization, which primarily focuses on mitigating overfitting, consequently leading to improved generalization performance [ 14 , 15 ]. For that purpose, several regularization methods are compared by following a methodology introduced in Bento et al [ 13 ], leveraging five public datasets that are homogenized, so that they can be arranged in different combinations, creating multiple OOD settings.…”
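The evaluation protocol described in this excerpt — homogenized datasets arranged in different combinations to create multiple OOD settings — can be sketched as a leave-datasets-out split enumeration. The dataset identifiers and the helper function are placeholders, not names from the paper.

```python
from itertools import combinations

# Hypothetical identifiers standing in for the five homogenized
# public HAR datasets (placeholder names, not from the paper).
datasets = ["D1", "D2", "D3", "D4", "D5"]

def ood_splits(domains, n_train):
    """Enumerate train/test partitions: train on n_train domains,
    evaluate out-of-distribution on the held-out ones."""
    for train in combinations(domains, n_train):
        test = [d for d in domains if d not in train]
        yield list(train), test

# Train on four datasets, hold one out for OOD evaluation.
splits = list(ood_splits(datasets, 4))
print(len(splits))  # 5 leave-one-dataset-out settings
```

Each split yields a model trained in-distribution and tested on a domain it never saw, which is the regime where the cited comparison between regularization methods takes place.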
Section: Introduction
confidence: 99%