Proceedings of the 19th ACM International Conference on Multimedia 2011
DOI: 10.1145/2072298.2071936
Mining concept relationship in temporal context for effective video annotation

Cited by 2 publications (2 citation statements); references 10 publications.
“…This part has nothing to do with low-level features and thus amounts to purely text-based processing. (2) As for the spatial-view term, we treat shots individually and consider only the concept correlations within shots. To take the content of the shot into account, we incorporate its visual features into the estimation of spatial concept correlation and generate a data-specific correlation for it.…”
Section: Two-view Concept Correlation Based Video Annotation Refinement
confidence: 99%
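The citing paper's idea of a "data-specific" spatial correlation can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function name, the square-root weighting, and the `visual_sim` vector are all assumptions standing in for whatever visual-feature model the paper actually uses.

```python
import numpy as np

def data_specific_correlation(global_corr, visual_sim):
    """Adapt a global concept co-occurrence matrix into a shot-specific one.

    global_corr : (n, n) concept correlation estimated from labels alone
    visual_sim  : (n,) similarity of this shot's visual features to each
                  concept's training exemplars, in [0, 1] (assumed given)

    Returns an (n, n) correlation biased toward visually supported concepts.
    """
    # Symmetric per-pair weight: a pair counts only as much as the shot
    # visually supports both of its concepts.
    w = np.sqrt(np.outer(visual_sim, visual_sim))
    return global_corr * w

# Toy example with two concepts whose global co-occurrence is 0.7.
global_corr = np.array([[1.0, 0.7],
                        [0.7, 1.0]])
sim = np.array([0.9, 0.3])  # the shot looks like concept 0, not concept 1
shot_corr = data_specific_correlation(global_corr, sim)
```

The pairwise correlation is damped when the shot's visual content does not support one of the two concepts, which is one plausible reading of "incorporate its visual features into the estimation of spatial concept correlation".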
“…Therefore, recent studies have begun to utilize this concept correlation to improve annotation performance [1]-[7]. Among them, Content-Based Concept Fusion (CBCF) is one of the most popular approaches, in which concept correlation is integrated into a context-based model [2], [3] and acts as a post-processing step to refine the initial results provided by multiple individual concept detectors.…”
Section: Introduction
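The CBCF-style post-processing described above can be sketched in a few lines. This is a minimal illustration under assumed conventions (function name, blending weight `alpha`, and the toy concepts are all hypothetical), not the method of [2] or [3]: each concept's detector score is blended with correlation-weighted evidence from the other concepts' scores.

```python
import numpy as np

def refine_scores(initial_scores, correlation, alpha=0.5):
    """Refine per-concept detector scores using concept correlation.

    initial_scores : (n,) detector confidences in [0, 1]
    correlation    : (n, n) concept correlation matrix, e.g. estimated
                     from concept co-occurrence in annotated training data
    alpha          : weight of each concept's own score vs. its context
    """
    corr = correlation.copy()
    np.fill_diagonal(corr, 0.0)            # a concept should not vote for itself
    row_sums = corr.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0          # guard against division by zero
    context = (corr @ initial_scores[:, None] / row_sums).ravel()
    return alpha * initial_scores + (1 - alpha) * context

# Toy example: three concepts, "sky" and "cloud" strongly correlated.
scores = np.array([0.9, 0.2, 0.1])         # sky, cloud, indoor
corr = np.array([[1.0, 0.8, 0.1],
                 [0.8, 1.0, 0.1],
                 [0.1, 0.1, 1.0]])
refined = refine_scores(scores, corr)
# "cloud" is pulled up by its correlation with the confident "sky" detection.
```

This captures the key property of CBCF as described in the citation statement: the individual detectors run first, and the correlation model only refines their outputs afterward.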