2011
DOI: 10.1523/jneurosci.2074-11.2011

Multiplexed and Robust Representations of Sound Features in Auditory Cortex

Abstract: We can recognize the melody of a familiar song when it is played on different musical instruments. Similarly, an animal must be able to recognize a warning call whether the caller has a high-pitched female or a lower-pitched male voice, and whether they are sitting in a tree to the left or right. This type of perceptual invariance to "nuisance" parameters comes easily to listeners, but it is unknown whether or how such robust representations of sounds are formed at the level of sensory cortex. In this study, w…

Cited by 118 publications (124 citation statements). References 59 publications.
“…formant frequency) that are robust to variation in other physical parameters of the sound (e.g. azimuthal location) and such responses have been found in secondary mammalian and avian auditory areas [91,92]. In terms of higher-level categorization, research in starlings points to a role of the Caudomedial Nidopallium (NCM) for classifying behaviorally relevant classes of songs [93] and research in primates suggests that both the Superior Temporal Gyrus (STG) and the ventrolateral Prefrontal Cortex (vPFC) could be involved in semantic discrimination [94–98].…”
Section: Animal Vocalizations (mentioning)
confidence: 86%
“…Indeed, the available cortical and imaging data indicates it is not (King and Middlebrooks 2010; Ahveninen et al. 2014) and could be based instead on a logical representation of space operating as a network of neural interconnections. The temporally complex and integrative nature of auditory cortical processing (Walker et al. 2011; Bizley and Cohen 2013) and the need to integrate nonauditory cues (e.g., Goossens and van Opstal 1999) suggests that auditory space and the objects within it will ultimately depend on diverse inputs. Importantly, our stimuli were two spectro-temporally complex stimuli chosen so that their various components would strongly bind to one of two perceptual objects.…”
Section: Discussion (mentioning)
confidence: 99%
“…The presence of multiple auditory cortical areas on the ectosylvian gyrus (EG) of this species was first demonstrated by using 2‐deoxyglucose autoradiography (Wallace et al., 1997) and subsequently confirmed by using optical imaging of intrinsic signals (Nelken et al., 2004) and single‐unit recording (Kelly et al., 1986; Kelly and Judge, 1994; Kowalski et al., 1995; Bizley et al., 2005). Although most electrophysiological recording studies have focused on the primary auditory cortex (A1) (Phillips et al., 1988; Kowalski et al., 1996; Schnupp et al., 2001; Fritz et al., 2003; Rabinowitz et al., 2011; Keating et al., 2013), the nonprimary auditory fields in this species are now receiving increasing attention (Nelken et al., 2008; Bizley et al., 2009, 2010, 2013; Walker et al., 2011; Atiani et al., 2014).…”
mentioning
confidence: 99%