2018
DOI: 10.1007/978-3-030-01246-5_40

Neural Graph Matching Networks for Fewshot 3D Action Recognition

Cited by 97 publications
(61 citation statements)
References 33 publications
“…Moreover, the number of applicable directions of GNNs in computer vision is still growing. It includes human-object interaction [144], few-shot image classification [145], [146], [147], semantic segmentation [148], [149], visual reasoning [150], and question answering [151].…”
Section: Practical Applications
confidence: 99%
“…NGMN: Neural Graph Matching Network. In [43], a Neural Graph Matching Network (NGMN) is proposed for few-shot 3D action recognition, where 3D data are represented as interaction graphs. A GCN is applied to update node features in the graphs, and an MLP is employed to update the edge strengths.…”
Section: GNN-based Graph Matching Network
confidence: 99%
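The node/edge updates described in that citation can be sketched as follows. This is a minimal illustration, not the NGMN implementation: it assumes a single symmetrically normalised GCN layer and a two-layer edge MLP, with all weight matrices (`W`, `W1`, `b1`, `w2`, `b2`) as hypothetical learned parameters.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def gcn_layer(A, X, W):
    """One GCN step: aggregate neighbour features through the
    self-loop-augmented, degree-normalised adjacency, then project."""
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return relu(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W)

def edge_strength(h_i, h_j, W1, b1, w2, b2):
    """Two-layer MLP mapping a pair of node embeddings to a scalar
    edge strength, squashed into (0, 1) by a sigmoid."""
    z = relu(np.concatenate([h_i, h_j]) @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(z @ w2 + b2)))
```

With random weights, `gcn_layer` maps an (N, d) node-feature matrix to an (N, d') one; `edge_strength` then scores any pair of updated node embeddings.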
“…Since graph data usually have a complex structure, learning a metric that facilitates generalizing from only a few graph examples is a major challenge. Some recent work [43] has begun to explore the few-shot 3D action recognition problem with graph-based similarity-learning strategies: a neural graph matching network is proposed to jointly learn a graph generator and a graph matching metric function that optimize the few-shot learning objective of 3D action recognition. However, since the objective is defined specifically for the 3D action recognition task, the model cannot be directly applied to other domains.…”
Section: Few-shot Learning
confidence: 99%
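To make the idea of a graph matching metric concrete, here is a toy similarity function (a hypothetical simplification, not the learned metric from [43]): two graphs, each given as a matrix of node embeddings, are compared by softly matching every node to its most similar counterpart in the other graph and averaging the best-match cosine similarities in both directions.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity with a small epsilon for numerical safety."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def graph_matching_score(H_query, H_support):
    """Symmetric soft-matching score between two graphs, each an
    (N, d) matrix of node embeddings: average, over both graphs, of
    each node's similarity to its best match in the other graph."""
    sims = np.array([[cosine(h, g) for g in H_support] for h in H_query])
    return 0.5 * (sims.max(axis=1).mean() + sims.max(axis=0).mean())
```

In a few-shot setting, a query graph would then be assigned the class of the support graph that yields the highest score.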
“…For example, Liu et al. [25] adopted unary fluents to represent attributes of a single object and binary fluents for pairs of objects in egocentric videos, then used an LSTM [11] to recognize which action is performed. In addition, Recurrent Neural Networks (RNNs) [16] or Graph Convolutional Networks (GCNs) [12,18,31,50] are used for structured video representation and action recognition in 2D or 3D scenes. Due to the absence of rules for logical reasoning, the explainability of these methods is limited.…”
Section: Related Work
confidence: 99%
“…The popular two-stream convolutional networks [3,9,41,44] capture complementary information: appearance from still frames and motion between frames. Besides, spatio-temporal graphs with Recurrent Neural Networks (RNNs) [16] or Graph Convolutional Networks (GCNs) [12,18,31,50] focus on structured video representation. Recently, with the advances of deep learning in scene graph representation [4,22,51], researchers have attempted to use the attributes of an object and the relationships between objects for semantic-level video content understanding.…”
Section: Introduction
confidence: 99%