Companion Publication of the 21st International Conference on Intelligent User Interfaces 2016
DOI: 10.1145/2876456.2879473

Semantic Sketch-Based Video Retrieval with Autocompletion

Abstract: The IMOTION system is a content-based video search engine that provides fast and intuitive known-item search in large video collections. User interaction consists mainly of sketching, which the system recognizes in real time and uses to make suggestions based on both the visual appearance of the sketch (what the sketch looks like in terms of colors, edge distribution, etc.) and its semantic content (what object the user is sketching). The latter is enabled by a predictive sketch-based UI that identifies likely candidates…
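The abstract describes a retrieval pipeline that mixes low-level visual features of the sketch with predicted semantic labels, the latter also driving autocompletion suggestions. The following is a minimal sketch of that general idea, not the IMOTION implementation; all names (extract_color_histogram, predict_sketch_labels, the keyframe index layout, the alpha weighting) are hypothetical placeholders.

```python
import numpy as np


def extract_color_histogram(image, bins=8):
    """Coarse RGB histogram as a stand-in for visual appearance features."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-9)


def predict_sketch_labels(sketch_image):
    """Placeholder for a sketch classifier returning (label, probability) pairs.

    In a real system this would be a trained sketch-recognition model.
    """
    return [("car", 0.62), ("bus", 0.21), ("truck", 0.09)]


def autocomplete_suggestions(sketch_image, top_k=3):
    """Top predicted object labels, shown to the user as autocompletion hints."""
    return [label for label, _ in predict_sketch_labels(sketch_image)[:top_k]]


def score_keyframes(sketch_image, index, alpha=0.5):
    """Rank indexed keyframes by a weighted mix of visual and semantic similarity.

    `index` is a list of dicts: {"id", "color_hist", "labels": {label: score}}.
    """
    query_hist = extract_color_histogram(sketch_image)
    query_labels = dict(predict_sketch_labels(sketch_image))
    results = []
    for frame in index:
        # Visual term: histogram overlap between sketch and keyframe (1 = identical).
        visual = 1.0 - 0.5 * np.abs(query_hist - frame["color_hist"]).sum()
        # Semantic term: agreement between predicted sketch labels and frame labels.
        semantic = sum(p * frame["labels"].get(lbl, 0.0)
                       for lbl, p in query_labels.items())
        results.append((frame["id"], alpha * visual + (1.0 - alpha) * semantic))
    return sorted(results, key=lambda item: item[1], reverse=True)
```

Usage would amount to building `index` offline from keyframes, then calling `score_keyframes(sketch, index)` on each stroke update and surfacing `autocomplete_suggestions(sketch)` alongside the ranked results; the `alpha` weight between visual and semantic evidence is an assumption for illustration.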

Cited by 7 publications (2 citation statements) · References 13 publications
“…sketch-text generation [22], sketch animation [44], sketch-based user interface retrieval [23], etc.) More related to our work are DuetDraw [37] and IMOTION [46]. DuetDraw presents an interface where AI and humans can draw pictures interactively to conduct further sketch colorization, showing great user experience and promising application prospects in human-AI co-creation.…”
Section: Related Work
Citation type: mentioning, confidence: 99%
“…Either independently or in conjunction with these sketches, semantic information can be expressed by keywords or descriptions that are derived from manual or machine generated annotations [14,31]. There are some approaches that try to bridge the divide between the visual sketches and the textual annotations, by generating the semantic labels based on sketched input [6,30]. However, these methods differ mostly in the user-facing query formulation stage and do not offer any different or even richer query information.…”
Section: Related Work
Citation type: mentioning, confidence: 99%