Digest of Papers. Second International Symposium on Wearable Computers (Cat. No.98EX215)
DOI: 10.1109/iswc.1998.729536

Speaking and listening on the run: design for wearable audio computing

Abstract: The use of speech and auditory interaction on wearable computers can provide an awareness of events and personal messages, without requiring one's full attention or disrupting the foreground activity. A passive "hands-and-eyes-free" approach is appropriate when users need convenient and timely access to remote information and communication services. Nomadic Radio is a distributed computing platform for wearable access to unified messaging via an auditory interface. We demonstrate the use of auditory cues, spati…
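To make the abstract's idea of non-disruptive auditory delivery concrete, the following is a minimal Python sketch of scaling a notification's presentation to message priority and listener engagement. It is an illustration only, not the Nomadic Radio implementation; all names (PresentationLevel, choose_presentation, the priority scale) are hypothetical.

```python
# A minimal sketch (not the authors' implementation) of presenting
# messages at different auditory "levels", so low-priority events stay
# ambient while urgent ones get full speech. All names are hypothetical.

from enum import Enum

class PresentationLevel(Enum):
    SILENT = 0        # suppress entirely
    AMBIENT_CUE = 1   # background sound, no attention demanded
    AUDITORY_CUE = 2  # short distinctive tone identifying the message type
    SUMMARY = 3       # synthesized spoken header (sender, subject)
    FULL_SPEECH = 4   # read the whole message aloud

def choose_presentation(priority: int, user_is_busy: bool) -> PresentationLevel:
    """Map message priority (0 = low .. 3 = urgent) and a coarse estimate
    of the user's engagement to a presentation level, so a notification
    never demands more attention than the message warrants."""
    if user_is_busy:
        priority -= 1  # back off when the foreground activity matters more
    if priority <= 0:
        return PresentationLevel.AMBIENT_CUE
    if priority == 1:
        return PresentationLevel.AUDITORY_CUE
    if priority == 2:
        return PresentationLevel.SUMMARY
    return PresentationLevel.FULL_SPEECH

print(choose_presentation(priority=3, user_is_busy=True))  # SUMMARY
```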


Cited by 29 publications (15 citation statements) | References 24 publications
“…Firstly, they are usually structured hierarchically, but contain little navigational information, often leading the user to become lost in the menu structure (Wolf et al 1995). Secondly, the sequential nature of speech does not allow for simultaneous browsing of information, and therefore places heavy demands on short-term memory (Schumacher et al 1995, Sawhney and Schmandt 1998). Brewster (1997) addressed these issues by using earcons as navigation cues for telephone-based interfaces, but did not extend this technique to speech-only interaction with automated mobile phone services.…”
Section: Introduction (mentioning)
confidence: 99%
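A brief sketch of the navigation-cue idea the statement above attributes to Brewster (1997): each node of a hierarchical audio menu carries a short, structured earcon that plays on entry, so the listener can track their position without visual feedback. The menu layout and the play_earcon stub below are invented for illustration, not taken from that paper.

```python
# A hedged sketch of earcons as navigation cues in a hierarchical audio
# menu. Node names, earcon labels, and play_earcon are all hypothetical.

MENU = {
    "root":     {"earcon": "chord_c",      "children": ["messages", "calendar"]},
    "messages": {"earcon": "chord_c_up",   "children": ["voice", "email"]},
    "calendar": {"earcon": "chord_c_down", "children": []},
    "voice":    {"earcon": "arpeggio_1",   "children": []},
    "email":    {"earcon": "arpeggio_2",   "children": []},
}

def play_earcon(name: str) -> None:
    print(f"[earcon: {name}]")  # stand-in for actual audio playback

def enter(node: str) -> None:
    """Play the node's earcon on entry so menu position is always audible."""
    play_earcon(MENU[node]["earcon"])

enter("root")      # the user hears the root chord
enter("messages")  # a related motif signals a descent of one level
```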
“…Past research has shown that humans can process simultaneous streams of input through the auditory channel and selectively focus on one of them (Sawhney and Schmandt, 1998). Ideally, by leaving the hands and eyes free for the task underway, speech input and auditory output are better suited to survey needs, feeding information into data-recording equipment in a timely and convenient way.…”
Section: Speech-enabled Multi-modal Interface (mentioning)
confidence: 99%
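The selective-attention effect the statement above describes is commonly exploited by giving each concurrent stream its own spatial position, so the listener can attend to one and ignore the rest. Below is a hedged sketch using constant-power stereo panning to spread simultaneous streams across the sound field; Nomadic Radio itself used richer spatialized audio, so this is a simplified illustration and all function names are hypothetical.

```python
# A hedged sketch: spread n simultaneous audio streams across the stereo
# field with constant-power panning. pan_gains and place_streams are
# invented names for illustration.

import math

def pan_gains(azimuth_deg: float) -> tuple[float, float]:
    """Constant-power left/right gains for a source at azimuth_deg,
    where -90 is hard left, 0 is centre, and +90 is hard right."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)

def place_streams(n: int) -> list[tuple[float, float]]:
    """Spread n concurrent streams evenly between -60 and +60 degrees."""
    if n == 1:
        return [pan_gains(0.0)]
    positions = [-60.0 + 120.0 * i / (n - 1) for i in range(n)]
    return [pan_gains(p) for p in positions]

for left, right in place_streams(3):
    print(f"L={left:.2f} R={right:.2f}")
```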
“…On the other hand, speech/audio interaction is in essence sequential in time. It lacks the browsing and prefetching support offered by an image/text-based display (Sawhney and Schmandt, 1998).…”
Section: Speech-enabled Multi-modal Interface (mentioning)
confidence: 99%
“…Work on Nomadic Radio by Sawhney and Schmandt [9] has also considered diverse kinds of awareness for mobile users, but has worked to integrate and present them in a purely audio format. By comparison, our systems look and feel fairly conventional for corporate users.…”
Section: Related Work (mentioning)
confidence: 99%