2019
DOI: 10.1371/journal.pone.0219927

Evaluating time-reversed speech and signal-correlated noise as auditory baselines for isolating speech-specific processing using fNIRS

Abstract: Evidence using well-established imaging techniques, such as functional magnetic resonance imaging and electrocorticography, suggests that speech-specific cortical responses can be functionally localised by contrasting speech responses with an auditory baseline stimulus, such as time-reversed (TR) speech or signal-correlated noise (SCN). Furthermore, these studies suggest that SCN is a more effective baseline than TR speech. Functional near-infrared spectroscopy (fNIRS) is a relatively novel, optically-based imaging…

Cited by 14 publications (22 citation statements). References 111 publications.
“…Signal-correlated noise (SCN) formed the second auditory condition. SCN is a non-speech signal which is modulated but is unintelligible, and has been used previously in language studies involving neuroimaging (e.g., Stoppelman et al., 2013; Brown et al., 2014; Mushtaq et al., 2019). The third auditory stimulus was steady speech-shaped noise (SSSN), an unmodulated equivalent of SCN.…”
Section: Methods (mentioning)
confidence: 99%
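For readers unfamiliar with these baseline stimuli, the sketch below shows one common way SCN and SSSN can be generated, assuming the classic sign-randomisation approach for SCN (often attributed to Schroeder, 1968). The file names and the SSSN construction are illustrative assumptions, not the stimulus-generation procedures of the cited studies.

```python
# Minimal sketch: signal-correlated noise (SCN) via sign randomisation.
# Multiplying each speech sample by a random +/-1 keeps the temporal
# envelope and broadband power but destroys spectral fine structure,
# so the result is modulated yet unintelligible.
# File names are hypothetical; "soundfile" is assumed for audio I/O.
import numpy as np
import soundfile as sf

speech, fs = sf.read("sentence.wav")          # hypothetical mono recording
rng = np.random.default_rng(seed=0)

scn = speech * rng.choice([-1.0, 1.0], size=speech.shape)

# Steady speech-shaped noise (SSSN), the unmodulated counterpart: impose the
# long-term magnitude spectrum of the speech on random phases, then match RMS.
mag = np.abs(np.fft.rfft(speech))
phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=mag.shape))
sssn = np.fft.irfft(mag * phases, n=len(speech))
sssn *= np.sqrt(np.mean(speech ** 2) / np.mean(sssn ** 2))

sf.write("sentence_scn.wav", scn, fs)
sf.write("sentence_sssn.wav", sssn, fs)
```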
“…Analyses of fNIRS measurements were conducted in MATLAB in conjunction with functions from the HOMER2 package (Huppert et al., 2009) and custom scripts developed in our laboratory and previously used in our work (Dewey and Hartley, 2015; Wiggins et al., 2016; Anderson et al., 2017, 2019; Wijayasiri et al., 2017; Lawrence et al., 2018; Mushtaq et al., 2019). Channels with poor optode-scalp contact were removed using the scalp coupling index (SCI) technique by Pollonini et al. (2014).…”
Section: Methods (mentioning)
confidence: 99%
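As a rough illustration of the channel-pruning step mentioned above: the scalp coupling index band-passes a channel's two wavelengths around the cardiac rhythm and correlates them, since a heartbeat visible at both wavelengths indicates good optode-scalp contact. The Python sketch below assumes raw intensity signals at two wavelengths; the 0.5-2.5 Hz band and 0.75 threshold are common choices, not parameters confirmed by this paper.

```python
# Minimal sketch of the scalp coupling index (SCI; Pollonini et al., 2014):
# zero-lag correlation of cardiac-band-filtered signals at two wavelengths.
import numpy as np
from scipy.signal import butter, filtfilt

def scalp_coupling_index(wl1, wl2, fs, band=(0.5, 2.5)):
    """Correlate the cardiac-band content of one channel's two wavelengths."""
    nyq = fs / 2.0
    b, a = butter(3, [band[0] / nyq, band[1] / nyq], btype="band")
    f1 = filtfilt(b, a, wl1)
    f2 = filtfilt(b, a, wl2)
    f1 = (f1 - f1.mean()) / f1.std()   # normalise so the mean product
    f2 = (f2 - f2.mean()) / f2.std()   # equals the Pearson correlation
    return float(np.mean(f1 * f2))

# Hypothetical pruning step: keep channels whose SCI exceeds the threshold.
# good_channels = [ch for ch in channels
#                  if scalp_coupling_index(ch.wl1, ch.wl2, fs) > 0.75]
```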
“…Additionally, fNIRS has a much higher temporal resolution than fMRI, with sampling rates of up to 100 Hz compared to fMRI's 0.5 Hz [36]. This allows for both event-related [35,37] and block designs [38]. However, it is important to note that this is still much slower than methods such as EEG that do not rely on sluggish hemodynamic responses, and instead measure more instantaneous electrical pulses.…”
Section: Benefits and Limitations of Optical Imaging (mentioning)
confidence: 99%
“…A primary use has been the investigation of cortical processing of physical qualities of sound, such as intensity, amplitude and frequency modulations, and auditory-spatial cues (Weder et al., 2020; Weder et al., 2018; Zhang et al., 2018). fNIRS has also been employed to evaluate the perceptual qualities of speech and listening effort, as well as language development in normal-hearing and hearing-impaired populations (Anderson et al., 2019; Lawrence et al., 2018; Mushtaq et al., 2019; Pollonini et al., 2014; Rovetti et al., 2019; Rowland et al., 2018; Sevy et al., 2010; Wiggins et al., 2016b; Wijayasiri et al., 2017; Zhang et al., 2020). Research questions relating to the development of auditory cortical function (Gervain et al., 2008), and cortical reorganization following impaired sensory input and subsequent rehabilitation (Anderson et al., 2017; Wiggins and Hartley, 2015), have been investigated using fNIRS, as have outcomes related to cochlear implantation (Anderson et al., 2019) and auditory pathologies such as tinnitus (Basura et al., 2018; Shoushtarian et al., 2020).…”
Section: Introduction (mentioning)
confidence: 99%