2021 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip42928.2021.9506690
Interaction-GCN: A Graph Convolutional Network Based Framework for Social Interaction Recognition in Egocentric Videos

Abstract: In this paper we propose a new framework, named InteractionGCN, to categorize social interactions in egocentric videos. Our method extracts patterns of relational and non-relational cues at the frame level and uses them to build a relational graph, from which the interactional context at the frame level is estimated via a Graph Convolutional Network (GCN) based approach. It then propagates this context over time, together with first-person motion information, through a Gated Recurrent Unit architecture. Ablat…
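The abstract describes a two-stage pipeline: a frame-level GCN over a relational graph of social cues, followed by a GRU that propagates that frame context over time together with first-person motion information. The sketch below is a minimal, hypothetical PyTorch rendering of such a pipeline; the module names, feature dimensions, mean-pooling readout, and concatenation-based fusion with motion features are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a GCN + GRU interaction-recognition pipeline.
# All layer sizes, the pooling step and the fusion scheme are assumptions
# made for illustration; they are not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, feats, adj):
        # feats: (num_people, in_dim), adj: (num_people, num_people)
        adj_hat = adj + torch.eye(adj.size(0))           # add self-loops
        norm = adj_hat.sum(dim=1).pow(-0.5)              # D^-1/2
        adj_norm = norm.unsqueeze(1) * adj_hat * norm.unsqueeze(0)
        return F.relu(self.lin(adj_norm @ feats))


class InteractionModel(nn.Module):
    """Frame-level GCN context + GRU temporal aggregation (hypothetical layout)."""
    def __init__(self, cue_dim=32, motion_dim=8, hidden=64, num_classes=5):
        super().__init__()
        self.gcn = GCNLayer(cue_dim, hidden)
        self.gru = nn.GRU(hidden + motion_dim, hidden, batch_first=True)
        self.cls = nn.Linear(hidden, num_classes)

    def forward(self, frame_graphs, motion):
        # frame_graphs: list of (node_feats, adj) per frame; motion: (T, motion_dim)
        contexts = []
        for feats, adj in frame_graphs:
            ctx = self.gcn(feats, adj).mean(dim=0)       # pool people into a frame context
            contexts.append(ctx)
        seq = torch.cat([torch.stack(contexts), motion], dim=1).unsqueeze(0)
        _, h = self.gru(seq)                             # propagate context over time
        return self.cls(h.squeeze(0))                    # interaction-category logits


if __name__ == "__main__":
    T, people, cue_dim, motion_dim = 4, 3, 32, 8
    graphs = [(torch.randn(people, cue_dim), torch.ones(people, people)) for _ in range(T)]
    motion = torch.randn(T, motion_dim)
    print(InteractionModel()(graphs, motion).shape)      # torch.Size([1, 5])
```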

Cited by 1 publication (1 citation statement)
References 24 publications
“…activity recognition [23,21], social interaction analysis [1,11], video summarisation [8,27], etc. To the best of our knowledge, only conventional cameras have been used for action recognition in the context of egocentric vision, an exception being a very recent work [22], which introduces N-EPIC-Kitchens dataset, an event-based version of EPIC-Kitchens [7], a well-known dataset of egocentric videos captured with conventional wearable cameras.…”
Section: Related Work
confidence: 99%