2021
DOI: 10.1109/lra.2021.3059624
Multi-GAT: A Graphical Attention-Based Hierarchical Multimodal Representation Learning Approach for Human Activity Recognition

Cited by 61 publications (18 citation statements)
References 45 publications
“…Moreover, the addition of the attention mechanism makes the model interpretable to a certain degree. Due to these advantages of the GAT model, many graph neural network models have been proposed that are based on GATs 21,22 or other specific types of attention mechanisms. [23][24][25]…”
Section: Preliminaries (mentioning)
confidence: 99%
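For context on what this quote refers to: a single GAT attention head scores each edge with a shared attention vector and normalizes the scores over each node's neighbourhood. The sketch below is a minimal NumPy illustration of that general mechanism (as introduced by Velickovic et al.), not the Multi-GAT implementation from the cited paper; every function name, shape, and the tiny example graph are assumptions made for the example.

# Minimal single-head GAT sketch in NumPy. Illustrative only; this is
# NOT the Multi-GAT code from the paper, and all names are assumptions.
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_head(H, A, W, a):
    """One GAT attention head.

    H: (N, F)   node features
    A: (N, N)   adjacency with self-loops (1 where an edge exists)
    W: (F, Fp)  shared linear transform
    a: (2*Fp,)  attention vector, split into source/neighbour halves
    Returns the (N, Fp) updated node features.
    """
    Z = H @ W                         # transformed features, (N, Fp)
    Fp = Z.shape[1]
    src = Z @ a[:Fp]                  # per-node source term, (N,)
    dst = Z @ a[Fp:]                  # per-node neighbour term, (N,)
    # e_ij = LeakyReLU(a^T [z_i || z_j]), computed for all pairs at once
    e = leaky_relu(src[:, None] + dst[None, :])
    e = np.where(A > 0, e, -np.inf)   # keep scores only along edges
    # softmax over each node's neighbourhood -> attention coefficients
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ Z                  # attention-weighted aggregation

# Tiny usage example: 5 nodes with self-loops plus one extra edge.
rng = np.random.default_rng(0)
N, F, Fp = 5, 8, 4
H = rng.normal(size=(N, F))
A = np.eye(N)
A[0, 1] = A[1, 0] = 1.0
W = rng.normal(size=(F, Fp))
a = rng.normal(size=(2 * Fp,))
print(gat_head(H, A, W, a).shape)     # -> (5, 4)

The per-edge coefficients alpha are what give GATs the degree of interpretability the quote mentions: inspecting a row of alpha shows which neighbours a node attended to when updating its representation.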
“…Robots have started to be widely employed in various environments, including automating warehouses [1]- [4], controlling traffic [5], robot-guided evacuation [6], and human-robot interaction [7]- [9]. In many of these environments, multiple robots are deployed to form a multi-agent system, and each agent interacts with other agents or humans to complete an assigned task [10]- [13].…”
Section: Introduction (mentioning)
confidence: 99%
“…There have been impressive advances in activity recognition frameworks tailored for robotics applications [1], [2], [3], [4], [5], [6], [7], [8]. However, this task remains very challenging in practice, as agents mostly operate in an open, constantly changing environment, and we will never be able to capture and annotate a large number of training examples for every possible category [9], which is a requirement in the majority of presented approaches.…”
Section: Introduction (mentioning)
confidence: 99%