2019
DOI: 10.1109/tcsvt.2018.2884941
IR Feature Embedded BOF Indexing Method for Near-Duplicate Video Retrieval

Cited by 22 publications (11 citation statements)
References 40 publications

“…It is calculated by a two-dimensional discrete Radon transform (RT). This approach outperforms the state of the art in near-duplicate video retrieval [21].…”
Section: Related Work
confidence: 91%
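
The excerpt above credits the indexed feature to a two-dimensional discrete Radon transform. As a rough illustration only, the sketch below computes a per-frame Radon-based signature with scikit-image's `radon`; the function name `radon_signature`, the angle count, and the per-angle energy reduction are assumptions for illustration, not the cited paper's actual IR feature construction.

```python
# Illustrative sketch only: a per-frame signature built from a 2-D discrete
# Radon transform. The reduction to per-angle projection energies is an
# assumption, not the cited paper's exact IR feature.
import numpy as np
from skimage.transform import radon

def radon_signature(frame_gray: np.ndarray, num_angles: int = 180) -> np.ndarray:
    """Project a greyscale frame along num_angles directions and return an
    L2-normalised vector of per-angle projection energies."""
    theta = np.linspace(0.0, 180.0, num_angles, endpoint=False)
    # sinogram has one column per projection angle
    sinogram = radon(frame_gray.astype(np.float64), theta=theta, circle=False)
    energy = (sinogram ** 2).sum(axis=0)
    return energy / (np.linalg.norm(energy) + 1e-12)
```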
“…Video-level retrieval methods encode each video at the video level and search the embedding space for the k-nearest neighbors of the query video's video-level feature. Various frame-feature aggregation methods [3,5,18,19,20,21] have been used to obtain a single video-level representation. Liong et al. [22] propose a temporal pooling layer that aggregates successive frames by means of average pooling.…”
Section: Video-level Retrieval Methods
confidence: 99%
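
To make the video-level pipeline described in this excerpt concrete, here is a minimal sketch, assuming frame features have already been extracted by some backbone: temporal average pooling produces one embedding per video, and retrieval is a k-nearest-neighbor search by cosine similarity. The names `pool_video` and `knn_search` are illustrative, not taken from the cited works.

```python
# Minimal sketch of video-level retrieval: temporal average pooling of
# precomputed frame features, then k-nearest-neighbour search.
import numpy as np

def pool_video(frame_features: np.ndarray) -> np.ndarray:
    """Average a (num_frames, dim) matrix over time and L2-normalise it,
    yielding a single video-level embedding."""
    v = frame_features.mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-12)

def knn_search(query: np.ndarray, database: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k database videos most similar to the query,
    ranked by cosine similarity (dot product of L2-normalised embeddings)."""
    sims = database @ query  # shape: (num_videos,)
    return np.argsort(-sims)[:k]

# Hypothetical usage with precomputed frame features:
# database = np.stack([pool_video(f) for f in all_videos_frame_features])
# top_k = knn_search(pool_video(query_frames), database, k=10)
```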
“…For efficient video similarity measurement, most retrieval methods share a straightforward motivation: aggregate local frame-level features into clip-level or even video-level representations, such as global vectors [15,25], hash codes [8,9,19], or Bag-of-Words (BoW) [2,14,16], and measure video similarity by distances between the aggregated representations. However, the aggregated representations are too coarse to preserve fine-grained information and cannot be used for partial segment localization.…”
Section: Related Work
confidence: 99%
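
Among the aggregation schemes listed in this excerpt, Bag-of-Words is the one most directly related to the indexed paper's BoF indexing. Below is a minimal sketch, assuming a precomputed codebook (e.g. from k-means) and precomputed frame-level descriptors; `bow_encode` and `bow_distance` are illustrative names, and hard assignment with an L1 histogram distance is only one of several possible choices.

```python
# Minimal sketch of Bag-of-Words aggregation: hard-assign local frame-level
# descriptors to a precomputed codebook and compare the resulting histograms.
import numpy as np

def bow_encode(descriptors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Hard-assign (n, dim) frame-level descriptors to a (K, dim) codebook
    and return an L1-normalised K-bin visual-word histogram."""
    # squared Euclidean distance from every descriptor to every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=codebook.shape[0]).astype(np.float64)
    return hist / (hist.sum() + 1e-12)

def bow_distance(h1: np.ndarray, h2: np.ndarray) -> float:
    """L1 distance between two BoW histograms, a coarse video-level
    dissimilarity of the kind the excerpt describes."""
    return float(np.abs(h1 - h2).sum())
```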