“…Speech includes recorded voice or synthetic voices generated with text-to-speech applications. Associating non-speech sounds (auditory icons and earcons) with events has been shown to improve interaction with graphical human-computer interfaces (Gaver et al., 1991; Blattner et al., 1992; DiGiano et al., 1993; Brewster et al., 1996). Auditory icons are everyday sounds that can intuitively be associated with system events (Gaver et al., 1991; Gaver, 1993a,b), while earcons are abstract, musical tones that can be used in structured combinations to convey audio messages (Blattner et al., 1989; Brewster, 1998).…”