2020
DOI: 10.1007/978-3-030-58565-5_38

NODIS: Neural Ordinary Differential Scene Understanding



Cited by 10 publications (5 citation statements)
References 46 publications
“…The performance of EDET in predicate classification is shown in Table 4 and Figure 9. [22] 58.5 / 65.2 / 67.1; NODIS [23] 58.9 / 66.0 / 67.9; VC-Tree [24] 59.8 / 66.2 / 67.9; GPS-Net [25] 60 … The results show that EDET can generate excellent scene parsing in the scene graph predicate classification task. R@K means the recall rate of the top K prediction results.…”
Section: Predicate Classification (mentioning)
Confidence: 98%
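The excerpt defines R@K as the recall rate of the top-K prediction results. A minimal sketch of that metric as commonly computed in scene graph evaluation (the function name and the toy triples below are illustrative, not taken from any cited paper):

```python
# Sketch of Recall@K for scene graph prediction (illustrative, not the
# exact evaluation code of NODIS or EDET): predictions are score-ranked
# (subject, predicate, object) triples; R@K is the fraction of
# ground-truth triples recovered among the top-K predictions.

def recall_at_k(pred_triples, gt_triples, k):
    """pred_triples: list of triples sorted by descending score;
    gt_triples: set of ground-truth triples for one image."""
    if not gt_triples:
        return 0.0
    top_k = set(pred_triples[:k])
    return len(top_k & set(gt_triples)) / len(gt_triples)

# Toy example: two of four ranked predictions are correct.
preds = [("man", "riding", "horse"), ("man", "wearing", "hat"),
         ("horse", "on", "grass"), ("hat", "on", "man")]
gt = {("man", "riding", "horse"), ("horse", "on", "grass")}
print(recall_at_k(preds, gt, 2))  # 0.5: one of two GT triples in top-2
```

In benchmark tables such as the one quoted above, this per-image value is averaged over the test set and reported at several cutoffs (the "R@K" columns).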
“…The applications include image retrieval [46], image captioning [1,45], VQA [51,25] and image generation [24,19]. In order to generate high-quality scene graphs from images, a series of works explore different directions such as utilizing spatial context [61,65,40], graph structure [60,58,34], optimization [8], reinforcement learning [36,51], semi-supervised training [7] or a contrastive loss [66]. These works have achieved excellent results on image datasets [29,42,31].…”
Section: Related Work (mentioning)
Confidence: 99%
“…The source code is made publicly available on Github. Now many models [32], [33], [34], [35], [36], [37] are available to generate scene graphs from different perspectives, and some works even extend the scene graph generation task from images to videos [38], [39], [40], [41]. Two-stage methods following [2] are currently dominating scene graph generation: several works [9], [32], [42], [43] use residual neural networks with the global context to improve the quality of the generated scene graphs.…”
Section: Scene Graph Generation (mentioning)
Confidence: 99%
“…Now many models [32], [33], [34], [35], [36], [37] are available to generate scene graphs from different perspectives, and some works even extend the scene graph generation task from images to videos [38], [39], [40], [41]. Two-stage methods following [2] currently dominate scene graph generation: several works [9], [32], [42], [43] use residual neural networks with global context to improve the quality of the generated scene graphs. Xu et al. [42] use standard RNNs to iteratively improve the relationship prediction via message passing, while MotifNet [9] stacks LSTMs to reason about the local and global context.…”
Section: Scene Graph Generation (mentioning)
Confidence: 99%
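The excerpt above describes iteratively refining relationship predictions via message passing between scene graph nodes. A toy sketch of that general idea, using plain neighbor averaging in place of the learned RNN/GRU updates the cited methods actually use (all names and the update rule here are illustrative assumptions):

```python
# Toy message passing over a scene graph (illustrative only): each node's
# state is repeatedly mixed with the average of its neighbors' states.
# Real systems (e.g., the RNN-based iterative message passing the excerpt
# attributes to Xu et al.) replace this averaging with learned updates.

def message_passing(features, edges, steps=2, alpha=0.5):
    """features: {node: scalar state}; edges: undirected (u, v) pairs;
    alpha: how strongly neighbor messages overwrite the current state."""
    neighbors = {n: [] for n in features}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    state = dict(features)
    for _ in range(steps):
        new_state = {}
        for n, h in state.items():
            if neighbors[n]:
                msg = sum(state[m] for m in neighbors[n]) / len(neighbors[n])
                new_state[n] = (1 - alpha) * h + alpha * msg
            else:
                new_state[n] = h  # isolated nodes keep their state
        state = new_state
    return state

# After one step, "horse" has absorbed information from both neighbors.
state = message_passing({"man": 1.0, "horse": 0.0, "grass": 0.0},
                        [("man", "horse"), ("horse", "grass")], steps=1)
print(state)
```

The design point the excerpt makes is that such iterative context exchange (whether via RNN message passing or stacked LSTMs, as in MotifNet) lets each object or relationship prediction condition on the rest of the scene rather than being made in isolation.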