2006
DOI: 10.1007/s00530-006-0052-y
An interface for mutual disambiguation of recognition errors in a multimodal navigational assistant

Abstract: Users often have tasks that can be accomplished with the aid of multiple media, for example with text, sound, and pictures: communicating an urban navigation route, for instance, can be expressed with pictures and text. Today's mobile devices have multimedia capabilities; cell phones have cameras, displays, sound output, and (soon) speech recognition. Potentially, these multimedia capabilities can be used for multimedia-intensive tasks, but two things stand in the way. First, recognition of visual input and speech…

Cited by 4 publications
(2 citation statements)
References 14 publications
“…Bell, Feiner, and Höllerer (2002); Calvary et al. (2003); Hong, Dickson, Chiu, Shen, and Kafeza (2007); Korhonen et al. (2007); Kurvinen, Lähteenmäki, Salovaara, and Lopez (2007); Lieberman and Chu (2007); Lum and Lau (2002); Mäntyjärvi and Seppänen (2003); Rehman, Stajano, and Coulouris (2007); Selker (2004); Smailagic and Siewiorek (2002). Usability: Barnard, Yi, Jacko, and Sears (2005); Burrell and Gay (2002); Kaasinen (2003). Distribution of article numbers by year and classification framework.…”
mentioning, confidence: 98%
“…The explicit mode of interaction is most often carried out via screens, frequently touch-sensitive, that are mounted on predefined surfaces within an intelligent environment (Kahl, 2011), or that are part of mobile devices handled by users (mobile phones, PDAs, tablet computers, etc.) (Shin, 2010). Other modes of interaction include voice commands and audio notifications (Hatala, 2005), classic computer interfaces (mouse and keyboard), or other atypical interfaces such as light indicators (Bjelica, 2011d), and issuing actions through hand gestures or body movements (Bigdelou, 2012) (Hatala, 2005; Hong, 2007; Korhonen, 2007; Lieberman, 2007; Dey, 2009; Bjelica, 2011e).…”
Section: Nivo korisničke sprege (User Interface Level)
unclassified