Proceedings of the 5th ACM on International Conference on Multimedia Retrieval 2015
DOI: 10.1145/2671188.2749399

Bridging the Ultimate Semantic Gap

Abstract: Semantic search in video is a novel and challenging problem in information and multimedia retrieval. Existing solutions are mainly limited to text matching, in which the query words are matched against the textual metadata generated by users. This paper presents a state-of-the-art system for event search without any textual metadata or example videos. The system relies on substantial video content understanding and allows for semantic search over a large collection of videos. The novelty and practicality is de…

Cited by 50 publications (5 citation statements)
References 40 publications
“…To deal with the out-of-vocabulary (OOV) problem, query expansion with ontology influencing [47] and webly-labeled learning by crawling online visual data [18] are commonly adopted. Despite considerable progress [8,19,35], automatic selection of concepts to capture query semantics as well as context remains highly difficult. Human intervention is often required in practice, for example, by manually removing undesired concepts after automatic matching [34] or by hand-picking query phrases that should be matched with concepts [45].…”
Section: Related Work
confidence: 99%
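The concept-matching step that this statement describes — mapping query words onto a fixed concept vocabulary, with expansion for OOV terms — can be sketched as follows. The vocabulary and synonym table are illustrative stand-ins, not taken from the cited papers, and the hand-written synonym dictionary here only approximates ontology-based expansion.

```python
# A minimal sketch of query-to-concept matching with OOV expansion.
# CONCEPT_VOCAB and SYNONYMS are hypothetical examples, not data
# from the cited work.
CONCEPT_VOCAB = {"dog", "car", "beach", "bicycle"}
SYNONYMS = {"puppy": ["dog"], "automobile": ["car"], "seashore": ["beach"]}

def match_concepts(query_words):
    """Return in-vocabulary concepts; expand OOV words via synonyms."""
    matched = set()
    for word in query_words:
        if word in CONCEPT_VOCAB:
            matched.add(word)
        else:  # out-of-vocabulary: try expanded terms instead
            for alt in SYNONYMS.get(word, []):
                if alt in CONCEPT_VOCAB:
                    matched.add(alt)
    return matched
```

As the statement notes, a fully automatic version of this mapping often selects undesired concepts, which is why manual pruning after matching remains common in practice.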
“…In [39], the authors propose a state-of-the-art search engine for video event search that requires no example videos submitted with the query, a setting called zero-example (0Ex). The system consists of a video semantic indexing component, a semantic query generation component, multimedia search, and pseudo-relevance feedback.…”
Section: Recent Approaches 1) Zero-shot
confidence: 99%
“…It orders the event concept vector scores in descending order, forming an exponential-shaped curve, and then selects the first k concepts. This paper uses a large concept pool of 13,488 semantic concepts; the mAP of the proposed system is slightly better than that of other systems such as [39]. Real-time video retrieval is feasible with this approach because the concept detection scores are precomputed over the video database and the semantic similarity computation is fast.…”
Section: Recent Approaches 1) Zero-shot
confidence: 99%
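The retrieval scheme summarized above — sort the event's concept scores in descending order, keep the top k, then score each video against only those concepts using its precomputed detection scores — can be sketched as follows. The function names, the example arrays, and the dot-product similarity are illustrative assumptions, not the cited paper's exact formulation.

```python
import numpy as np

def select_top_k_concepts(concept_scores, k):
    """Order concept scores descending and keep the first k indices."""
    order = np.argsort(concept_scores)[::-1]  # descending order
    return order[:k]

def rank_videos(video_concept_scores, query_weights, top_k_idx):
    """Rank videos by similarity computed over the selected concepts only.

    video_concept_scores: (n_videos, n_concepts) precomputed detector
    scores, so ranking needs no per-query video processing.
    """
    sub_scores = video_concept_scores[:, top_k_idx]
    sub_weights = query_weights[top_k_idx]
    sims = sub_scores @ sub_weights          # dot-product similarity
    return np.argsort(sims)[::-1]            # best-matching videos first

# Illustrative example: 5 concepts, 3 videos (hypothetical scores).
query = np.array([0.9, 0.1, 0.7, 0.05, 0.3])
top3 = select_top_k_concepts(query, 3)
videos = np.array([[0.8, 0.2, 0.6, 0.1, 0.4],
                   [0.1, 0.9, 0.2, 0.8, 0.1],
                   [0.7, 0.1, 0.9, 0.0, 0.2]])
ranking = rank_videos(videos, query, top3)
```

Because the per-video concept scores are computed offline, the online cost per query is a single small matrix-vector product, which is what makes the real-time retrieval claimed in the statement plausible.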
“…The use of automatic detectors means that all data is included in the search, whether or not the user has supplied tags. The search engine would let the user prioritize the various filters to sort the data [as in 70,141]. (Alternatively, the user could start with a similarity search if they already had some relevant examples on hand.)…”
Section: The Proposed Dataset-building Process
confidence: 99%