2017 IEEE International Conference on Multimedia and Expo (ICME) 2017
DOI: 10.1109/icme.2017.8019364

Crowdsourced time-sync video tagging using semantic association graph

Cited by 22 publications (11 citation statements) · References 12 publications
“…Intuitively, TSCs containing video tags usually belong to hot topics and influence the trend of their follow-up TSCs. In contrast, noise usually neither shares semantic relationships with other TSCs over a period nor influences other TSCs [58]. Moreover, we find that the density of TSCs (number of TSCs per unit time) affects how users communicate.…”
Section: Introduction
confidence: 80%
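The noise-filtering intuition quoted above — that noisy TSCs lack semantic similarity with their temporal neighbors — can be sketched roughly as follows. This is an illustrative toy version, not the paper's algorithm: the bag-of-words representation, window size, and threshold are all assumptions.

```python
from collections import Counter
import math

def cosine_sim(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def flag_noise(tscs, window=10.0, threshold=0.2):
    """Flag a TSC as noise when its best similarity to any other TSC
    within +/- `window` seconds falls below `threshold`.
    `tscs` is a list of (timestamp_seconds, Counter_of_words) pairs."""
    flags = []
    for i, (t_i, bow_i) in enumerate(tscs):
        best = 0.0
        for j, (t_j, bow_j) in enumerate(tscs):
            if i != j and abs(t_i - t_j) <= window:
                best = max(best, cosine_sim(bow_i, bow_j))
        flags.append(best < threshold)
    return flags
```

A comment semantically isolated from everything nearby (e.g. spam) gets flagged, while comments that echo a shared hot topic survive — matching the quoted intuition, though the real method builds a semantic association graph rather than pairwise thresholding.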
“…However, their approach does not consider the semantic association between TSCs, so some of the video-content-independent noise cannot be filtered out. In summary, TSCs have several features that distinguish them from common comments [32,58] and make the above methods less effective on TSCs: (1) Semantic relevance. TSCs contain abundant semantic information that describes both local and global video content, depending on the time interval selected around the timestamp.…”
Section: Introduction
confidence: 99%
“…The higher the neighbor similarity and the lower the keyframe similarity, the less likely the TSC is a spoiler. Considering the high-noise property, we implement an Interactive Variance Attention (IVA) mechanism in the framework to effectively reduce the impact of noise, exploiting the low semantic similarity between noise and its surrounding TSCs [34]. Finally, we obtain the likelihood of a spoiler based on the difference between the neighbor and keyframe similarities.…”
Section: Keyframes Comments
confidence: 99%
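The spoiler test described in this excerpt — a TSC that matches surrounding comments but not the on-screen keyframe is less likely to be a spoiler — might be scored as in the minimal sketch below. The scoring function and its rescaling are illustrative assumptions, not the cited authors' implementation:

```python
def spoiler_score(neighbor_sim, keyframe_sim):
    """Likelihood-style spoiler score in [0, 1].

    A TSC similar to its temporal neighbors (neighbor_sim high) but
    dissimilar from the current keyframe (keyframe_sim low) fits the
    ongoing discussion, so it scores low; a TSC that matches the frame
    more than the discussion scores high. Inputs are assumed in [0, 1].
    """
    diff = keyframe_sim - neighbor_sim  # in [-1, 1]
    return (diff + 1.0) / 2.0           # rescale to [0, 1]
```

A symmetric TSC (equal similarities) lands at 0.5, giving a neutral baseline; the real framework feeds attention-weighted similarities from the IVA mechanism into this comparison rather than raw scalars.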
“…Wu et al [29] first formally introduced the TSC and proposed a preliminary method to extract video tags. Yang et al [34] systematically collate the features of TSCs and introduce a graph-based algorithm that markedly reduces the impact of noise. Further uses of TSC data have since been proposed, such as extracting highlight shots [24,30], labeling important segments [19], detecting events [16], generating temporal descriptions of videos [31], and video recommendation [5,23,32].…”
Section: Related Work
confidence: 99%