2012
DOI: 10.1007/978-3-642-27355-1_77

Context-Aware Querying for Multimodal Search Engines

Abstract: Multimodal interaction provides the user with multiple modes of interacting with a system, such as gestures, speech, text, video, audio, etc. A multimodal system allows for several distinct means for input and output of data. In this paper, we present our work in the context of the I-SEARCH project, which aims at enabling context-aware querying of a multimodal search framework including real-world data such as user location or temperature. We introduce the concepts of MuSeBag for multimodal query int…
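The abstract refers to enriching multimodal queries with real-world context such as the user's location or the temperature. As a rough illustration of that idea only, the sketch below shows how a browser client could attach such context to a query payload; the /search endpoint and all field names are assumptions, not part of the published I-SEARCH interface.

// Illustrative sketch: the /search endpoint and all field names are assumptions,
// not taken from the I-SEARCH specification.
async function buildContextAwareQuery(queryItems) {
  // Obtain the user's position via the standard browser Geolocation API.
  const position = await new Promise((resolve, reject) =>
    navigator.geolocation.getCurrentPosition(resolve, reject)
  );

  return {
    // Multimodal query items, e.g. { type: "text", value: "red sports car" }
    // or { type: "audio", value: "<recorded clip>" }.
    items: queryItems,
    context: {
      location: {
        latitude: position.coords.latitude,
        longitude: position.coords.longitude
      },
      // Temperature would come from an external sensor or weather service;
      // a placeholder is used here.
      temperatureCelsius: null,
      timestamp: new Date().toISOString()
    }
  };
}

// Usage: send the context-enriched query to the (hypothetical) search endpoint.
buildContextAwareQuery([{ type: "text", value: "red sports car" }])
  .then(query => fetch("/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(query)
  }));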

Cited by 8 publications (6 citation statements)
References 11 publications
“…Of course, a lot of effort is being made to integrate various elements of interaction and develop new protocols and architectures that allow their use [2,5,8]. In fact, the W3C has a specific group for this work [7,12].…”
Section: State of the Art
Citation type: mentioning (confidence: 99%)
“…It uses a JavaScript-based component called UIIFace [4], which enables the user to interact with I-SEARCH via a wide range of modern input modalities like touch, gestures, or speech. Therefore it provides an adaptive algorithm for gesture recognition along with support for novel input devices like Microsoft's Kinect in a web environment.…”
Section: Fig. 3 Automatic Adaption of I-SEARCH GUI to Different Devices
Citation type: mentioning (confidence: 99%)
“…Therefore it provides an adaptive algorithm for gesture recognition along with support for novel input devices like Microsoft's Kinect in a web environment. The GUI also provides a WebSocket-based collaborative search tool called CoFind [4] that enables users to search collaboratively via a shared results basket, and to exchange messages throughout the search process. A third component called pTag [4] produces personalized tag recommendations to create search queries, filter results and add tags to retrieved result items.…”
Section: Fig. 3 Automatic Adaption of I-SEARCH GUI to Different Devices
Citation type: mentioning (confidence: 99%)
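The cited passages describe CoFind only at a high level: a WebSocket-based tool built around a shared results basket and in-session messaging. The following is a minimal sketch of that collaboration pattern, not CoFind's actual code; the server URL and message schema are assumptions.

// Illustrative sketch of a WebSocket-based shared results basket with in-session
// messaging, in the spirit of the cited CoFind description. The server URL and
// message schema are assumptions.
const socket = new WebSocket("wss://example.org/cofind/session-42");

// Broadcast a result item that the local user added to the shared basket.
function addToBasket(resultId) {
  socket.send(JSON.stringify({ type: "basket:add", resultId }));
}

// Broadcast a chat message to the other participants in the search session.
function sendMessage(text) {
  socket.send(JSON.stringify({ type: "chat", text }));
}

// Apply updates arriving from collaborators.
socket.addEventListener("message", event => {
  const msg = JSON.parse(event.data);
  if (msg.type === "basket:add") {
    console.log("Collaborator added result", msg.resultId);
  } else if (msg.type === "chat") {
    console.log("Collaborator says:", msg.text);
  }
});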
“…The I-SEARCH graphical user interface (GUI) is implemented with the objective of sharing one common code base for all possible input devices (Subfigure 1b shows mobile devices of different screen sizes and operating systems). It uses a JavaScript-based component called UIIFace [8], which enables the user to interact with I-SEARCH via a wide range of modern input modalities like touch, gestures, or speech. The GUI also provides a WebSocket-based collaborative search tool called CoFind [8] that enables users to search collaboratively via a shared results basket, and to exchange messages throughout the search process.…”
Section: Graphical User Interface
Citation type: mentioning (confidence: 99%)
“…It uses a JavaScript-based component called UIIFace [8], which enables the user to interact with I-SEARCH via a wide range of modern input modalities like touch, gestures, or speech. The GUI also provides a WebSocket-based collaborative search tool called CoFind [8] that enables users to search collaboratively via a shared results basket, and to exchange messages throughout the search process. A third component called pTag [8] produces personalized tag recommendations to create, tag, and filter search queries and results.…”
Section: Graphical User Interface
Citation type: mentioning (confidence: 99%)
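pTag is likewise described only as producing personalized tag recommendations for creating, tagging, and filtering queries and results. One deliberately simple illustration of such a recommender is sketched below: it ranks tags from a user's own tagging history by frequency and prefix match. This is an assumption for illustration, not the pTag algorithm from the paper.

// Illustrative sketch of personalized tag suggestion by prefix match over a
// user's tag history, ranked by how often the user has applied each tag.
// This is not the pTag algorithm described in the paper, only an assumption.
function suggestTags(prefix, tagHistory, limit = 5) {
  const counts = new Map();
  for (const tag of tagHistory) {
    counts.set(tag, (counts.get(tag) || 0) + 1);
  }
  return [...counts.entries()]
    .filter(([tag]) => tag.startsWith(prefix.toLowerCase()))
    .sort((a, b) => b[1] - a[1])   // most frequently used tags first
    .slice(0, limit)
    .map(([tag]) => tag);
}

// Usage: suggest tags for the prefix "ca" from a small history.
console.log(suggestTags("ca", ["car", "cat", "car", "castle", "boat"]));
// -> ["car", "cat", "castle"]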