Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 2014
DOI: 10.1145/2556288.2557249
ThumbReels

Abstract: During online search, the user's expectations often differ from those of the author. This is known as the 'intention gap' and is particularly problematic when searching for and discriminating between online video content. An author uses description and meta-data tags to label their content, but often cannot predict alternate interpretations or appropriations of their work. To address this intention gap, we present ThumbReels, a concept for query-sensitive video previews generated from crowdsourced, temporally …
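As a rough illustration of the concept described in the abstract, the sketch below shows one way query terms could be matched against crowdsourced temporal tags to assemble a short query-sensitive preview. It is a minimal sketch under assumed inputs; the data structures, function names, and selection rule are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only (not the authors' implementation): pick video
# segments whose crowdsourced temporal tags match a search query, then
# return them as the segment list for a short query-sensitive preview.
# All names and structures here are hypothetical.
from dataclasses import dataclass

@dataclass
class TemporalTag:
    label: str      # crowdsourced keyword describing a moment in the video
    start_s: float  # tagged segment start, in seconds
    end_s: float    # tagged segment end, in seconds

def build_preview_segments(tags, query, max_duration_s=10.0):
    """Select tagged segments whose labels match any query term,
    up to a total preview length of max_duration_s seconds."""
    terms = {t.lower() for t in query.split()}
    matched = [t for t in tags if t.label.lower() in terms]
    matched.sort(key=lambda t: t.start_s)

    segments, total = [], 0.0
    for tag in matched:
        length = tag.end_s - tag.start_s
        if total + length > max_duration_s:
            break
        segments.append((tag.start_s, tag.end_s))
        total += length
    return segments

# Example: a viewer searches "goal"; only the moments the crowd tagged
# "goal" are stitched into the preview.
tags = [TemporalTag("goal", 12.0, 15.0), TemporalTag("crowd", 40.0, 44.0),
        TemporalTag("goal", 87.5, 90.0)]
print(build_preview_segments(tags, "goal"))  # [(12.0, 15.0), (87.5, 90.0)]
```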

Cited by 10 publications (2 citation statements)
References 7 publications
“…VidCrit captures reviewers' spoken comments and mouse interactions as well as physical actions of the reviewers and provides feedback interfaces of transcribed/segmented reviews. Craggs et al. presented a query-sensitive animated preview generated from crowdsourced semantic tagging [3] for online videos. DeCamp and Roy proposed a system for annotating the location and identity of people and objects in a multi-camera video [4].…”
Section: Related Work (mentioning)
Confidence: 99%
“…However, research in providing user interface support for CBIR systems was scarce (Pečenović et al., 2000; Santini and Jain, 2000; Nakazato et al., 2003; Heesch, 2008) until recently. Search engines designed to bridge the intention gap tend to integrate textual (keywords or tags) and visual features (Chen et al., 2013; Zhu et al., 2014; Craggs et al., 2014).…”
Section: Interactive Interface (mentioning)
Confidence: 99%