2021
DOI: 10.3389/fnins.2021.728686

Formant-Based Recognition of Words and Other Naturalistic Sounds in Rhesus Monkeys

Abstract: In social animals, identifying sounds is critical for communication. In humans, the acoustic parameters involved in speech recognition, such as the formant frequencies derived from the resonance of the supralaryngeal vocal tract, have been well documented. However, how formants contribute to recognizing learned sounds in non-human primates remains unclear. To determine this, we trained two rhesus monkeys to discriminate target and non-target sounds presented in sequences of 1–3 sounds. After training, we perfo…
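The abstract hinges on formant frequencies, the resonances of the supralaryngeal vocal tract. As a point of reference only, below is a minimal sketch of one common way such formants are estimated from audio, via linear predictive coding (LPC) and root-finding. It assumes the librosa and numpy packages; the file name, frame length, and model order are illustrative, and this is not the authors' analysis pipeline.

```python
# Minimal sketch: LPC-based formant estimation for a short, voiced audio frame.
# Illustrative only; not the pipeline used in the cited study.
import numpy as np
import librosa

def estimate_formants(y, sr, order=12):
    """Return candidate formant frequencies (Hz) for a short frame of audio."""
    # Pre-emphasis boosts high frequencies so higher formants are resolvable.
    y = np.append(y[0], y[1:] - 0.97 * y[:-1])
    # Fit an all-pole (LPC) model; its pole resonances approximate the formants.
    a = librosa.lpc(y, order=order)
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]            # one root per conjugate pair
    freqs = np.arctan2(np.imag(roots), np.real(roots)) * sr / (2 * np.pi)
    return np.sort(freqs[freqs > 90])            # discard near-DC poles

# Hypothetical usage on a 25 ms frame of a recorded word ("word.wav" is a placeholder).
y, sr = librosa.load("word.wav", sr=16000)
frame = y[:int(0.025 * sr)]
print(estimate_formants(frame, sr)[:3])          # first three formant estimates
```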

Cited by 1 publication (1 citation statement)
References 73 publications (91 reference statements)
“…This mechanism could rely on the integration of visual and auditory representations in multisensory areas of the brain (Gaffan and Harrison, 1991; Calvert et al., 2001; Beauchamp et al., 2004; Khandhadia et al., 2021; Diehl et al., 2022; Lemus and Lafuente, 2022), or from connecting neuronal representations of monkey calls at the superior temporal gyrus (STG) (Leaver and Rauschecker, 2010; Tsunada et al., 2011; Bodin and Belin, 2020; Bodin et al., 2021) with unitary activity associated to a monkey face at the superior temporal sulcus (STS) (Tsao et al., 2003; Leopold et al., 2006; Ohayon et al., 2012; Arcaro et al., 2017; Khandhadia et al., 2021). Regardless of NHP being able to discriminate words phonetically (Melchor et al., 2021), it is unclear whether their brains can encode complex sounds like words (Hickok and Poeppel, 2007; Yi et al., 2019; Morán et al., 2021; Stephen et al., 2023) and associate them with visual representations to experience CME.…”
Section: Introduction
confidence: 99%