2021
DOI: 10.1109/lra.2021.3062320
Toward Deep Generalization of Peripheral EMG-Based Human-Robot Interfacing: A Hybrid Explainable Solution for NeuroRobotic Systems

Cited by 41 publications (22 citation statements)
References 23 publications
“…The preliminary results on the three benchmark datasets revealed that the CNN feature extractor considerably improved the fuzzy classifier's performance and interpretation ability. With the same intention, Gulati et al (2021) demonstrated a hybrid interpretable solution to identify the 17 types of gestures in a user-specific approach. A Grad-CAM method was also developed to optimize the generalized model architecture and to explain its predictions.…”
Section: Hybrid Interpretable Model
confidence: 99%
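The Grad-CAM method mentioned above weights each feature map of the last convolutional layer by the spatial average of the class-score gradient, then sums and rectifies the result. A minimal NumPy sketch of that weighting step — the arrays here are made-up stand-ins for a real CNN's activations and gradients, not the cited model's tensors:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Gradient-weighted class activation map (Grad-CAM).

    activations: (K, T) feature maps from the last conv layer.
    gradients:   (K, T) gradients of the class score w.r.t. those maps.
    Returns a (T,) importance map over the input time axis.
    """
    weights = gradients.mean(axis=1)  # alpha_k: global-average-pooled gradients
    cam = np.maximum((weights[:, None] * activations).sum(axis=0), 0.0)  # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize to [0, 1] for visualization
    return cam

# Toy tensors standing in for a trained network's activations/gradients.
rng = np.random.default_rng(0)
A = rng.random((8, 50))   # 8 feature maps over 50 time steps
G = rng.random((8, 50))
cam = grad_cam(A, G)
print(cam.shape)  # (50,)
```

For sEMG the resulting map highlights which time steps (or channels) most influenced the predicted gesture class.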
“…The MLP model has 30 neurons in the hidden layer. We modified our recently proposed hybrid model [39] specifically for the identification problem. The hybrid model has a CNN module followed by an LSTM module.…”
Section: Comparative Study
confidence: 99%
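The CNN-followed-by-LSTM structure described in this excerpt can be sketched in PyTorch. All layer sizes below (channel counts, hidden size, window length) are illustrative assumptions, not the cited model's actual hyperparameters:

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Hypothetical sketch of a CNN module followed by an LSTM module."""
    def __init__(self, n_channels=12, n_classes=17):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):            # x: (batch, channels, time)
        f = self.cnn(x)              # (batch, 32, time/2) local features
        f = f.transpose(1, 2)        # (batch, time/2, 32) for the LSTM
        _, (h, _) = self.lstm(f)     # h: (1, batch, 64) final hidden state
        return self.head(h[-1])      # (batch, n_classes) class logits

model = CNNLSTM()
logits = model(torch.randn(4, 12, 200))  # 4 windows, 12 channels, 200 samples
print(logits.shape)  # torch.Size([4, 17])
```

The CNN extracts local spatio-temporal features from the raw sEMG window; the LSTM then summarizes their temporal evolution before classification.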
“…Note: Gestures are numbered from 1 to 40. Group A ([9,10,11,12,13,14,16,17,18,19,20,21,34,35,36,38,39,40]) has 18 training gestures in Set 1 and Set 2. Group Aᶜ includes the complementary gestures.…”
Section: Appendix A: Model Optimization
confidence: 99%
“…An RNN model can capture the underlying temporal dynamics from sEMG signals, since each hidden cell comprises the information from all previous hidden cells and the observation at the current timestamp. Some recent articles (for example [18], [19]), including our previous work [20], [21], have proposed hybrid models that leverage the benefits of both CNNs and RNNs for motor intention detection using sEMG signals. In [20], we proposed a hybrid approach that achieves high performance on conventional user-specific and generalized gesture classification, with reduced need for re-training and re-calibration.…”
Section: Introduction
confidence: 99%
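The claim that each hidden state folds in all previous observations is visible in the plain RNN recurrence h_t = tanh(W_x x_t + W_h h_{t-1}). A minimal NumPy sketch, with illustrative dimensions (e.g. 8 sEMG channels) that are assumptions rather than values from the cited works:

```python
import numpy as np

def rnn_forward(xs, W_x, W_h, h0):
    """Plain RNN recurrence: each hidden state combines the current
    observation with the entire history carried by the previous state."""
    h = h0
    for x in xs:
        h = np.tanh(W_x @ x + W_h @ h)
    return h

rng = np.random.default_rng(1)
T, d_in, d_h = 50, 8, 16  # 50 time steps, 8 channels, 16 hidden units
xs = rng.standard_normal((T, d_in))
W_x = rng.standard_normal((d_h, d_in)) * 0.1
W_h = rng.standard_normal((d_h, d_h)) * 0.1
h = rnn_forward(xs, W_x, W_h, np.zeros(d_h))
print(h.shape)  # (16,)
```

LSTMs replace this simple recurrence with gated updates to avoid vanishing gradients over long sEMG windows, but the information flow is the same.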