2022
DOI: 10.3390/brainsci12050681
DIANA, a Process-Oriented Model of Human Auditory Word Recognition

Abstract: This article presents DIANA, a new, process-oriented model of human auditory word recognition, which takes as its input the acoustic signal and can produce as its output word identifications and lexicality decisions, as well as reaction times. This makes it possible to compare its output with human listeners’ behavior in psycholinguistic experiments. DIANA differs from existing models in that it takes more available neuro-physiological evidence on speech processing into account. For instance, DIANA accounts fo…

Cited by 12 publications (11 citation statements)
References 107 publications
“…Still, such a representation of the mental lexicon does not capture effects related to rating-based measures of semantic richness like concreteness, valence, or arousal. It seems that models that make use of top-down weights, such as TRACE (McClelland & Elman, 1986) or DIANA (Ten Bosch et al, 2015, 2022), have a way of capturing these effects if the values of these variables are stored within the lexicon. It would be best if we could represent these through separate top-down weights or resting activation increases in order to avoid conflating different factors under a single factor.…”
Section: Discussion
confidence: 99%
“…Instead, the process of lexical access in models of spoken word recognition most often begins with the processing of (pseudo)acoustic features or phonemes and ends with detection of the correct form (unit/word) in the mental lexicon, while effectively eschewing any meaning activation (see also Gaskell & Marslen-Wilson, 2002, for additional critique). The mental lexicon is then usually represented as a semantically unconnected list of words—words are often strings of phonemes, related to each other by form only (as in, e.g., Hannagan et al, 2013; Luce, 1986; Luce et al, 2000; Luce & Pisoni, 1998; Marslen-Wilson, 1987; Marslen-Wilson & Tyler, 1980; McClelland & Elman, 1986; Norris, 1994; Norris & McQueen, 2008; Ten Bosch et al, 2015; You & Magnuson, 2018).…”
Section: Semantic Richness in the Auditory Modality
confidence: 99%
“…Many theories of spoken word recognition assume the existence of some kind of mental lexicon that links phonetic representations of words with their syntactic and semantic features (see e.g. Cutler, 2012; ten Bosch et al, 2022 and the citations therein for more details). Theories agree that the activation of words' representations involves both bottom-up (acoustic-phonetic matching) and top-down (context-dependent prediction) processes.…”
Section: Introduction
confidence: 99%
“…The exact set of units is predetermined by the model developer, avoiding the issue of learning what these units are in the first place. Even the recently introduced DIANA model [10], which does away with fixed pre-lexical units, uses a set of predetermined lexical units.…”
Section: Introduction
confidence: 99%