Lecture Notes in Computer Science
DOI: 10.1007/978-3-540-79547-6_40
A System That Learns to Tag Videos by Watching Youtube

Cited by 26 publications
(48 citation statements)
References 14 publications
“…The second one is a slightly modified version of the first that uses sliding temporal windows of three seconds. The last method, called motion histograms, is based on the work by Ulges [43]. In [43], the motion vectors are extracted directly from the MPEG compressed domain by the XviD encoder.…”
Section: Motion Periodicity
confidence: 99%
“…The last method, called motion histograms, is based on the work by Ulges [43]. In [43], the motion vectors are extracted directly from the MPEG compressed domain by the XviD encoder. The area covered by the motion vectors is then divided into 12 blocks, and inside each block a histogram of the 2D components of every vector within it is generated.…”
Section: Motion Periodicity
confidence: 99%
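The citation statement above describes the motion-histogram feature: motion vectors (e.g. read from the MPEG compressed domain) are binned per spatial block, with 12 blocks covering the frame. A minimal sketch of that binning step is shown below; the 12-block grid comes from the text, while the 4×3 layout, the bin count, and the magnitude range are assumptions, and the bitstream decoding that yields the vectors is not shown.

```python
import numpy as np

def motion_histograms(motion_vectors, frame_w, frame_h,
                      grid=(4, 3), bins=8, max_mag=16.0):
    """Block-wise histograms of 2D motion-vector components.

    motion_vectors: iterable of (x, y, dx, dy) rows, e.g. macroblock
    motion vectors extracted from an MPEG/XviD bitstream.
    grid: (cols, rows) spatial partition; 4x3 = 12 blocks, matching
    the 12-block division mentioned in the citing text (the exact
    layout is an assumption here).
    Returns one concatenated feature vector for the frame.
    """
    cols, rows = grid
    feats = []
    for r in range(rows):
        for c in range(cols):
            # Spatial extent of this block within the frame.
            x0, x1 = c * frame_w / cols, (c + 1) * frame_w / cols
            y0, y1 = r * frame_h / rows, (r + 1) * frame_h / rows
            in_block = [(dx, dy) for x, y, dx, dy in motion_vectors
                        if x0 <= x < x1 and y0 <= y < y1]
            if in_block:
                dxs, dys = zip(*in_block)
                # Separate histograms for the horizontal and vertical
                # components of the vectors falling in this block.
                hx, _ = np.histogram(dxs, bins=bins, range=(-max_mag, max_mag))
                hy, _ = np.histogram(dys, bins=bins, range=(-max_mag, max_mag))
            else:
                hx = hy = np.zeros(bins, dtype=int)
            feats.append(np.concatenate([hx, hy]))
    return np.concatenate(feats)  # 12 blocks x 2 components x bins values
```

For a 320×240 frame with the defaults, the result is a 12 × 2 × 8 = 192-dimensional vector that can be fed to a classifier alongside other per-frame features.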
“…Li et al. propose a system of this type that is able to classify the scene, segment each object, and annotate the image with a list of tags [13]. Ulges et al. apply a generative model to build a system for tag recommendation for videos [28]. The results corresponding to a given query can then be ranked according to confidence scores associated with the words that form the query.…”
Section: Related Work
confidence: 99%
“…Some recent research has examined techniques for propagating preference information through a three-month snapshot of viewing data [2]. Other works have examined the unique properties of web videos [48] and the use of online videos as training data [44]. In contrast, we are interested in addressing the video annotation problem using a combination of visual feature analysis and a text model.…”
Section: The Web, Collaborative Annotation, and YouTube
confidence: 99%