2021
DOI: 10.1145/3458725

Identification of Words and Phrases Through a Phonemic-Based Haptic Display

Abstract: Stand-alone devices for tactile speech reception serve a need as communication aids for persons with profound sensory impairments as well as in applications such as human-computer interfaces and remote communication when the normal auditory and visual channels are compromised or overloaded. The current research is concerned with perceptual evaluations of a phoneme-based tactile speech communication device in which a unique tactile code was assigned to each of the 24 consonants and 15 vowels of English. The tac…

Cited by 13 publications (17 citation statements) · References 44 publications
“…Following our neural observations, we tested for an auditory masking effect, i.e., putative consequences of the anti-phase tactile stimulation on the audibility of the target in the tone. Contrary to our hypothesis, our neural results, and previous findings from brief tactile (Gick, 2008; Gick & Derrick, 2009; Gillmeister & Eimer, 2007; Reed et al., 2021; Reed et al., 2019; Schürmann et al., 2004; Wilson et al., 2009, 2010) or transcranial electric stimulation (Riecke et al., 2015), we found no such auditory effect: neither tactile stimulation nor its relative phase had a significant impact on listeners' target-detection performance.…”
Section: 2 (contrasting)
confidence: 99%
“…Murray et al. (2005) found that the application of tactile pulses to a hand can accelerate the detection of an auditory noise burst, regardless of whether the tactile and auditory inputs are perceived to originate from the same or different locations. Brief tactile stimuli can also modulate the perception of more complex natural sounds, such as speech (Gick, 2008; Gick & Derrick, 2009; Reed et al., 2021; Reed et al., 2019), possibly as a consequence of the auditory enhancement described above. Regarding neural correlates, human brain studies have shown that the enhancing auditory effect of tactile stimulation may emerge in auditory cortical regions, including the primary auditory cortex (Hoefer et al., 2013) and auditory association areas (Foxe et al., 2002; Murray et al., 2005).…”
Section: Introduction (mentioning)
confidence: 99%
“…For example, in DeGuglielmo et al. (2021) a 4-actuator system is used to enhance music discrimination in a live concert scenario by creating a custom mapping scheme between the incoming signal and the frequency and amplitude of the transducers. A contrasting goal is presented by Reed et al. (2021), where phoneme identification in speech is improved by using a total of 24 actuators. These two examples have different objectives, and thus the systems have different requirements, but the overall architecture of both follows the one shown in Figure 1.…”
Section: Figure (mentioning)
confidence: 99%
“…The present study was motivated by recent research on phoneme-based tactile speech communication systems conducted at Facebook/Meta [1][2][3], Rice University [4,5], McGill University [6][7][8], and a collaborative effort between Purdue University and MIT [9][10][11][12][13][14]. Our approach assumes that the front end of the device contains a module for producing a string of phonemes extracted from either the acoustic speech signal (using automatic speech recognition) or written text (using a text-to-speech converter).…”
Section: Introduction (mentioning)
confidence: 99%