2022
DOI: 10.1101/2022.06.27.497678
Preprint

On the similarities of representations in artificial and brain neural networks for speech recognition

Abstract: How the human brain supports speech comprehension is an important question in neuroscience. Studying the neurocomputational mechanisms underlying human language is not only critical to understand and develop treatments for many human conditions that impair language and communication but also to inform artificial systems that aim to automatically process and identify natural speech. In recent years, intelligent machines powered by deep learning have achieved near human level of performance in speech recognition…


Cited by 1 publication (5 citation statements)
References: 71 publications
“…Our findings imply that developing appropriate intermediate representations for articulatory features may be central to speech recognition in both human and machine solutions. In human neuroscience studies, this account is consistent with previous findings of articulatory feature representation in the human auditory cortex (Mesgarani et al., 2014; Correia et al., 2015; Wingfield et al., 2017), but awaits further investigation and exploitation in machine solutions for speech recognition. In particular, previous work by Hamilton et al. (2021) has shown that, unlike our DNN architecture, the organization of early speech areas in the brain is not purely hierarchical, suggesting new potential avenues of model architectures, including layer-bypassing connections.…”
Section: Discussion (supporting)
confidence: 86%
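The "layer-bypassing connections" mentioned in this citation statement correspond to what machine-learning practice usually calls skip (residual) connections, where a later stage receives an earlier layer's output directly rather than only through the intervening layers. The sketch below is a minimal illustration of that idea, not the architecture used in the cited papers; the module name `BypassBlock`, the layer sizes, and the merge-by-addition choice are assumptions made purely for illustration.

```python
# Minimal sketch (hypothetical): a speech-encoder block in which the input
# bypasses an intermediate layer and is merged again downstream,
# i.e. a "layer-bypassing" (skip) connection.
import torch
import torch.nn as nn

class BypassBlock(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.layer_a = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.layer_b = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.layer_a(x)        # intermediate representation
        out = self.layer_b(h) + x  # later stage also receives the input directly
        return out

# Usage: a batch of 8 frames, each with 256 (assumed) spectro-temporal features.
features = torch.randn(8, 256)
print(BypassBlock()(features).shape)  # torch.Size([8, 256])
```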
“…Kriegeskorte et al., 2008b; Cadieu et al., 2014; Clarke et al., 2014; Khaligh-Razavi and Kriegeskorte, 2014; Güçlü and van Gerven, 2015; Kriegeskorte, 2015; Cichy et al., 2016; Kheradpisheh et al., 2016; Devereux et al., 2018), with less progress made in speech perception (though see our previous work: Su et al., 2014; Wingfield et al., 2017).…”
Section: Relating Dynamic Brain and Machine States: Comparing And Con… (mentioning)
confidence: 96%
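The comparisons this citation statement refers to are typically made with representational similarity analysis in the style of Kriegeskorte et al. (2008), which correlates pairwise dissimilarity structure across stimuli between a model layer and brain responses. The sketch below shows that general procedure only; the array shapes, random data, and distance/correlation choices are assumptions, not the cited papers' actual pipelines.

```python
# Minimal sketch (assumed shapes and data): representational similarity
# analysis comparing one DNN layer's activations with brain responses.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 50
model_activations = rng.standard_normal((n_stimuli, 512))  # e.g. one DNN layer
brain_responses = rng.standard_normal((n_stimuli, 100))    # e.g. voxels/electrodes

# Representational dissimilarity matrices (condensed form): pairwise
# correlation distance between stimulus-evoked activation patterns.
model_rdm = pdist(model_activations, metric="correlation")
brain_rdm = pdist(brain_responses, metric="correlation")

# Second-order comparison: rank-correlate the two RDMs.
rho, p = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RDM correlation: rho={rho:.3f}, p={p:.3f}")
```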