Research on auditory verbal hallucinations (AVHs) indicates that schizophrenia patients with AVHs show greater abnormalities on tasks requiring recognition of affective prosody (AP) than non-AVH patients. Detecting AP requires accurate perception of manipulations in pitch, amplitude and duration. Schizophrenia patients with AVHs also have difficulty detecting these acoustic manipulations, and a number of theorists have speculated that difficulties in pitch, amplitude and duration discrimination underlie AP abnormalities. This study examined whether both AP and these aspects of auditory processing are also impaired in first-degree relatives of persons with AVHs. It also examined whether pitch, amplitude and duration discrimination were related to AP and to hallucination proneness. Unaffected relatives of schizophrenia patients with AVHs (N = 19) and matched healthy controls (N = 33) were compared using tone discrimination tasks, an AP task, and clinical measures. Relatives were slower at identifying emotions on the AP task (p = 0.002), with secondary analyses showing this was especially so for happy (p = 0.014) and neutral (p = 0.001) sentences. There was a significant interaction between tone deviation level and group for pitch (p = 0.019), and relatives performed worse than controls on amplitude and duration discrimination. AP performance for happy and neutral sentences was significantly correlated with amplitude perception. Lastly, AVH proneness in the full sample was significantly correlated with pitch discrimination (r = 0.44), and pitch perception predicted AVH proneness (p = 0.005). These results suggest that basic impairments in auditory processing are present in relatives of AVH patients, potentially underlie slowed processing on AP tasks, and predict AVH proneness. Auditory processing deficits may therefore be a core feature of AVHs in schizophrenia and warrant further study as a potential endophenotype for AVHs.
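As a point of reference for the effect size above (our calculation, not one reported by the authors), a correlation of r = 0.44 corresponds to roughly 19% of shared variance between pitch discrimination and AVH proneness:

\[ r^{2} = 0.44^{2} \approx 0.19 . \]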
The ability of subjects to identify and reproduce brief temporal intervals is influenced by many factors, whether stimulus-based, task-based or subject-based. The current study examines the role individual differences play in subsecond and suprasecond timing judgments, using the schizotypy personality scale as a test-case approach for quantifying a broad range of individual differences. In two experiments, 129 (Experiment 1) and 141 (Experiment 2) subjects completed the O-LIFE personality questionnaire before performing a modified temporal-bisection task. In the bisection task, subjects responded to two identical instantiations of a luminance grating presented in a 4° window, 4° above fixation, for 1.5 s (Experiment 1) or 3 s (Experiment 2). Subjects initiated presentation with a button-press and released the button when they considered the stimulus to be half-way through (750 or 1500 ms). Subjects were then asked to indicate which of the two estimates they considered their ‘most accurate’. In this way we measured both performance on the task (a first-order measure) and subjects’ knowledge of their own performance (a second-order measure). Experiment 1 also examined the effects of grating drift and feedback on performance; Experiment 2 focused on the static/no-feedback condition. For the group data, Experiment 1 showed a significant effect of presentation order in the baseline (no-feedback) condition, which disappeared when feedback was provided. Moving the stimulus had no effect on perceived duration. Experiment 2 showed no effect of stimulus presentation order. This elimination of the subsecond order effect came at the expense of accuracy, as the mid-point of the suprasecond interval was generally underestimated. Response precision increased as a proportion of total duration, reducing the variance below that predicted by Weber’s law. This result is consistent with a breakdown of the scalar properties of time perception in the early suprasecond range. All subjects showed good insight into their own performance, though that insight did not necessarily correlate with the veridical bisection point. In terms of personality, we found significant differences in performance along the Unusual Experiences subscale, of most theoretical interest here, in the subsecond condition only. There were also significant correlations with Impulsive Nonconformity and Cognitive Disorganisation in the sub- and suprasecond conditions, respectively. Overall, these data support a partial dissociation of timing mechanisms at very short and slightly longer intervals. Further, these results suggest that perception is not the only critical mediator of confidence in temporal experience, since individuals can effectively compensate for differences in perception at the level of metacognition in early suprasecond time. Though there are individual differences in performance, these are perhaps smaller than expected from previous reports and indicate an effective timing mechanism dealing with brief durations, independent of the influence of...
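For readers unfamiliar with the scalar property invoked above, a minimal formulation of Weber’s law for interval timing (standard in the timing literature; the notation is ours, not the study’s) is

\[ \sigma(T) = k\,T, \qquad \text{so that} \qquad \mathrm{CV}(T) = \frac{\sigma(T)}{T} = k \quad \text{is constant across durations } T, \]

where \( \sigma(T) \) is the standard deviation of estimates of a target duration \( T \) and \( k \) is the Weber fraction. On this reading, variance falling “below that predicted by Weber’s law” means the coefficient of variation at the 3 s (suprasecond) interval was smaller than at the 1.5 s (subsecond) interval, which is what a breakdown of scalar timing in the early suprasecond range would look like.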
This study investigated the degree to which two types of reduced auditory signals (cochlear implant simulations) and visual speech cues combined for speech identification. The auditory speech stimuli were filtered to contain either amplitude envelope cues only or both amplitude envelope and spectral cues, and were presented with or without visual speech. In Experiment 1, IEEE sentences were presented in quiet and in noise. For presentation in quiet, speech identification was enhanced by the addition of both spectral and visual speech cues, but a ceiling effect prevented determining the degree to which these effects combined. In noise, these facilitation effects were more marked and were additive. Experiment 2 examined consonant and vowel identification in the context of CVC or VCV syllables presented in noise. For consonants, both spectral and visual speech cues facilitated identification, and these effects were additive. For vowels, the effect of combined cues was underadditive, with the effect of spectral cues reduced when presented with visual speech cues. Analysis indicated that without visual speech, spectral cues facilitated the transmission of place and vowel-height information, whereas with visual speech they facilitated the transmission of lip rounding, with little impact on the transmission of place information.
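To make the additivity claim concrete (illustrative notation, not the authors’): if \( P_{E} \) is identification performance with envelope-only audio, and \( \Delta_{S} \) and \( \Delta_{V} \) are the gains from adding spectral cues and visual speech separately, the combined-cue effects are additive when

\[ P_{E+S+V} \approx P_{E} + \Delta_{S} + \Delta_{V}, \]

and underadditive, as found here for vowels, when the combined score falls short of that sum. In practice such comparisons are often made on transformed scores (e.g., rationalized arcsine units) so that ceiling compression, as seen in the quiet condition of Experiment 1, does not masquerade as underadditivity.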