2020
DOI: 10.1007/978-3-030-58604-1_8
Boundary Content Graph Neural Network for Temporal Action Proposal Generation

Cited by 117 publications (53 citation statements)
References 25 publications
“…Comparative analysis. We compare our work to all seminal efforts that evaluate on these two standard benchmarks (Zhao et al, 2017;Chao et al, 2018; Zeng et al, 2019;Xu et al, 2020;Bai et al, 2020;Chen et al, 2020;Su et al, 2021). As in previous efforts (Zeng et al, 2019;Chen et al, 2020;Xu et al, 2020), we perform an additional test when combining our work with the additional power of proposal-to-proposal relations from PGCN (Zeng et al, 2019) and the temporal aggregation from MUSES (Liu et al, 2021).…”
Section: Methods
mentioning confidence: 99%
“…Instead, temporal action localization works aim to locate actions in untrimmed videos as well as classify them. Most works, like ours, investigate proposal generation and proposal evaluation (Zhao et al, 2017;Chao et al, 2018;Lin et al, 2018;Lin et al, 2019;Liu et al, 2019;Long et al, 2019;Xu et al, 2020;Bai et al, 2020;Chen et al, 2020;Su et al, 2021), but some just focus on proposal evaluation, such as (Zeng et al, 2019;Liu et al, 2021).…”
Section: Related Work
mentioning confidence: 99%
“…MGG [32] combines anchor-based methods and boundary-based methods to generate proposals. The works in [52,1] propose to use graph convolutional networks [22] to model temporal relationships in the input video. BMN [29] proposes a boundary-matching mechanism for the confidence evaluation of densely distributed proposals in an end-to-end pipeline.…”
Section: Related Work
mentioning confidence: 99%
“…The evaluation metric of temporal action detection is mAP, which calculates the Average Precision under multiple IoU thresholds for each action category. We adopt the two-stage "detection by classifying proposals" temporal action detection framework to combine our proposals with action classifiers. For fair comparisons, following [31,29,52,1], on ActivityNet v1.3, we adopt top-1 video-level classification results generated by method [55] and use confidence scores of BMN proposals for retrieving detection results. On THUMOS14, following BMN [29], we also use top-2 video-level classification results generated by UntrimmedNet [45].…”
Section: Action Detection With Our Proposals
mentioning confidence: 99%
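The mAP metric described in the excerpt above (Average Precision computed per action category at multiple temporal-IoU thresholds, then averaged) can be sketched as follows. This is a minimal illustrative sketch, not code from the cited papers or benchmarks; the function names, matching rule, and toy segments are assumptions for demonstration only:

```python
def temporal_iou(pred, gt):
    """IoU between two 1-D temporal segments, each given as (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def average_precision(preds, gts, iou_thresh):
    """AP for one class: preds are (start, end, score), gts are (start, end).
    A prediction counts as a true positive if it matches an unclaimed
    ground-truth segment with IoU >= iou_thresh; each gt is matched once."""
    preds = sorted(preds, key=lambda p: p[2], reverse=True)  # by confidence
    matched = [False] * len(gts)
    tp, fp, precisions = 0, 0, []
    for start, end, _ in preds:
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            iou = temporal_iou((start, end), gt)
            if not matched[j] and iou > best_iou:
                best_iou, best_j = iou, j
        if best_j >= 0 and best_iou >= iou_thresh:
            matched[best_j] = True
            tp += 1
            precisions.append(tp / (tp + fp))  # precision at each new recall point
        else:
            fp += 1
    return sum(precisions) / len(gts) if gts else 0.0

# mAP averages AP over action classes and, on benchmarks such as THUMOS14
# and ActivityNet, over a range of IoU thresholds. Toy single-class example:
gts = [(1.0, 3.0), (5.0, 8.0)]
preds = [(1.0, 3.0, 0.9), (5.5, 8.2, 0.8), (10.0, 11.0, 0.7)]
aps = [average_precision(preds, gts, t) for t in (0.5, 0.8)]
```

Stricter IoU thresholds reject loosely localized proposals, which is why detection papers report mAP at several thresholds rather than a single one.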
“…To demonstrate the effectiveness of the proposed MDN, we compare it with more than 10 state-of-the-art temporal action detection algorithms, including DBG (AAAI'20) [21], G-TAD (CVPR'20) [37], BC-GNN (ECCV'20) [1], BU-TAL (ECCV'20) [39], TSI (ACCV'20) [25], etc. For a fair comparison, we use the same video feature representation and post-processing step.…”
Section: Comparisons With the State-of-the-art
mentioning confidence: 99%