2017
DOI: 10.1044/2016_aja-16-0059

Child–Adult Differences in Using Dual-Task Paradigms to Measure Listening Effort

Abstract: Purpose: The purpose of the project was to investigate the effects of modifying the secondary task in a dual-task paradigm used to measure objective listening effort. Specifically, the complexity and depth of processing were increased relative to a simple secondary task. Method: Three dual-task paradigms were developed for school-age children. The primary task was word recognition. The secondary task was a physical response to a visual probe (simple task), a physical response to a complex probe (increased complexity…

Cited by 19 publications (21 citation statements)
References 63 publications
“…Relative to older children, younger children are more likely to demonstrate worse speech recognition performance in noise (Klatte et al, 2010b; Neuman et al, 2010) and in reverberation (Neuman and Hochberg, 1983), so they might also be more vulnerable to the effects of noise and reverberation on listening effort. Conversely, younger children tend to be more variable on some measures of listening effort (Picou et al, 2017a), and the additional variability might limit the possibility of demonstrating significant effects of reverberation on listening effort. Exploratory analysis with the current data set revealed a similar pattern of results with children when divided into four age groups (10–11, 12–13, 14–15, and 16–17 years).…”
Section: Discussion
confidence: 99%
“…Behavioral listening effort was evaluated using a dual-task paradigm. The paradigm, described in detail by Picou et al (2017a), included a primary task (monosyllable word recognition) and a secondary task (physical response to a visual probe). The monosyllable words, spoken by a female talker with an American English accent, were all nouns.…”
Section: Methods
confidence: 99%
“…This stipulates that a shallow level of processing of a piece of an auditory, linguistic signal would be the rapid analysis of acoustic characteristics such as pitch or rhythm, whereas deeper levels of processing range from rapidly mapping syllabic information to pre-existing phonological patterns (Rönnberg et al 2013) to the extraction and elaboration of semantic meaning and associations (Eysenck & Eysenck 1979). Recent studies such as Picou and Ricketts (2014), Picou et al (2017), and Hsu et al (2017) have applied the depth-of-processing framework in designing behavioural paradigms that investigate the effects of background noise levels, as well as of changing secondary tasks, on listening effort outcomes. These paradigms challenged participants to recognize spoken speech in unfavourable conditions (such as in noise) and thereby required the activation of explicit cognitive resources such as working memory (Rönnberg et al 2013).…”
Section: Depth of Semantic Processing
confidence: 99%