2018
DOI: 10.3389/fpsyg.2018.01568

Commentary on “Interaction in Spoken Word Recognition Models”

Cited by 10 publications (9 citation statements), 2019–2025
References 13 publications
“…For instance, an individual will report hearing a speech sound that has been replaced by noise (e.g., hearing the phoneme /s/ in the frame legi_lature , where the critical phoneme has been replaced by a cough; Warren, 1970), and an individual's estimate of an object's size can be influenced by the width between a person's hands (Stefanucci & Geuss, 2009). While such context effects are ubiquitous across a range of domains in cognitive psychology, an ongoing debate––whether in the domain of language (e.g., Magnuson, Mirman, Luthra, Strauss, & Harris, 2018; Norris, McQueen, & Cutler, 2018) or the domain of vision (e.g., Firestone & Scholl, 2014, 2016; Gilbert & Li, 2013; Lupyan, Abdel Rahman, Boroditsky, & Clark, 2020; Schnall, 2017a, 2017b)––centers on how contextual information is integrated with sensory signals. In particular, do contextual effects on sensory processing reflect influences on perception itself, or does context only affect an individual's postperceptual decisions?…”
Section: Introduction (mentioning, confidence: 99%)
“…Indeed, Norris et al (2018) make precisely this claim, and say the only reason feedback helps in TRACE is "because its initial behavior is suboptimal". Although we disagree about "suboptimality"…”
Section: Discussion (mentioning, confidence: 93%)
“…Suboptimal behavior is not necessarily incorrect behavior; the interesting question is to what degree a model corresponds with humans at behavioral, algorithmic, and neural levels (Magnuson et al, 2018), and optimality is an interesting baseline to consider. Norris et al (2018) argue that "...the best that any speech recognition system can do is compute the match between input features and lexical representations and select the best-matching word (more specifically, pick the word with maximum likelihood) ... Shortlist B...by virtue of implementing Bayesian inference, performs optimally; its use of Bayes' rule guarantees that the best-matching word must be recognized (p. 1)". Some details here may help us better understand and possibly refine the debate between autonomous (without feedback) and interactive (with feedback) accounts.…”
Section: Discussion (mentioning, confidence: 99%)
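
To make the quoted claim concrete, here is a minimal Python sketch of Bayes-rule word recognition as the passage describes it: score each lexical candidate by prior times likelihood, normalize, and select the word with the maximum posterior (which coincides with maximum likelihood when priors are equal). The three-word lexicon, priors, and likelihood values are invented for illustration; this is not Shortlist B's actual lexicon or likelihood model.

# Toy Bayesian word recognition in the spirit of the quoted description
# of Shortlist B. All numbers below are invented for illustration.
lexicon = {
    # word: (prior P(word), likelihood P(input | word) for one fixed input)
    "speech": (0.5, 0.20),
    "peach":  (0.3, 0.10),
    "beach":  (0.2, 0.05),
}

# Bayes' rule: P(word | input) = P(input | word) * P(word) / P(input),
# where P(input) sums the numerator over all lexical candidates.
evidence = sum(prior * lik for prior, lik in lexicon.values())
posterior = {w: prior * lik / evidence for w, (prior, lik) in lexicon.items()}

# "Select the best-matching word": the candidate with the maximum posterior.
recognized = max(posterior, key=posterior.get)
print(posterior)   # {'speech': 0.71..., 'peach': 0.21..., 'beach': 0.07...}
print(recognized)  # speech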
“…The details of the cognitive processes underlying human speech processing are still contentious. A long-standing debate revolves around the importance and timing of top-down versus bottom-up influence for word recognition during speech comprehension [2,3]. Certain autonomous models (e.g.…”
Section: Introduction (mentioning, confidence: 99%)
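
To ground the terms of this debate, here is a toy Python sketch (invented for illustration; it is not TRACE, Shortlist, or any published model) contrasting a purely bottom-up pass, in which word activations are driven only by phoneme evidence, with an interactive pass that adds lexical-to-phoneme feedback before the word layer is read out again.

import numpy as np

# Invented toy network: 3 phoneme units feed 2 word units.
# W[i, j] is the connection weight from phoneme j to word i.
W = np.array([[1.0, 1.0, 0.0],   # word A is supported by phonemes 0 and 1
              [0.0, 1.0, 1.0]])  # word B is supported by phonemes 1 and 2

phoneme_input = np.array([0.9, 0.4, 0.1])  # noisy bottom-up evidence

# Autonomous (bottom-up only): word activation comes solely from the input.
word_act = W @ phoneme_input           # -> [1.3, 0.5]

# Interactive: active words feed back and boost consistent phonemes,
# which the word layer then re-reads on the next cycle.
feedback_gain = 0.1
phoneme_act = phoneme_input + feedback_gain * (W.T @ word_act)
word_act_fb = W @ phoneme_act          # -> [1.61, 0.73]

print(word_act, word_act_fb)  # feedback boosts phonemes consistent with active words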