We address the following question: Is there a difference (D) between the time required for auditory and visual stimuli to be perceived? On each of 1000 trials, observers were presented with a light-sound pair separated by a stimulus onset asynchrony (SOA) between -250 ms (sound first) and +250 ms (light first). Observers indicated whether the light-sound pair came on simultaneously by pressing one of two (yes or no) keys. The SOA most likely to yield affirmative responses was defined as the point of subjective simultaneity (PSS). PSS values ranged from -21 ms (i.e. sound 21 ms before light) to +150 ms. Evidence is presented that each PSS is observer-specific. In a second experiment, each observer was tested at two observer-stimulus distances. The resultant PSS values were highly correlated (r = 0.954, p = 0.003), suggesting that each observer's PSS is stable. PSS values were significantly affected by observer-stimulus distance, suggesting that observers do not take into account the effect of changes in distance on the relative arrival times of light and sound. The difference (RTd) in simple reaction time to single visual and auditory stimuli was also estimated; no evidence was found that RTd is observer-specific or stable. The implications of these findings for the perception of multisensory stimuli are discussed.
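The abstract does not give the fitting procedure, but a common way to estimate a PSS from yes/no simultaneity judgements is to fit a peaked (e.g. Gaussian-shaped) function to the proportion of "simultaneous" responses across SOAs and take the SOA at its peak. A minimal sketch of that idea, using illustrative data and parameter names that are assumptions rather than details from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def simultaneity_curve(soa, amplitude, pss, width):
    """Gaussian-shaped proportion of 'simultaneous' responses; its peak is the PSS."""
    return amplitude * np.exp(-((soa - pss) ** 2) / (2 * width ** 2))

# Illustrative data: SOA in ms (negative = sound first) and proportion of 'yes' responses.
soas = np.array([-250, -150, -50, 0, 50, 150, 250], dtype=float)
p_yes = np.array([0.05, 0.30, 0.70, 0.85, 0.90, 0.45, 0.10])

# Fit the curve; the fitted 'pss' parameter estimates the point of subjective simultaneity.
(amplitude, pss, width), _ = curve_fit(
    simultaneity_curve, soas, p_yes, p0=[1.0, 0.0, 100.0]
)
print(f"Estimated PSS: {pss:.1f} ms")  # positive = light must lead for perceived simultaneity
```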
This study investigated the development of peripheral vision (at least 30° eccentric to fixation) in profoundly deaf children without cochlear implantation, comparing them with age-matched hearing controls and with deaf and hearing adult data. Deaf and hearing children between the ages of 5 and 15 years were assessed using a new method of static perimetry designed specifically for paediatric use. The deaf group (N = 25) comprised 14 females and 11 males, mean age 9.92 years (range 5-15 years); the hearing group (N = 64) comprised 34 females and 30 males, mean age 9.13 years (range 5-15 years). All participants had good visual acuity in both eyes (< 0.200 logMAR). Accuracy of detection and reaction time to briefly presented LED stimuli of three light intensities, at eccentricities between 30° and 85°, were measured while fixation was maintained on a central target. The study found reduced peripheral vision in deaf children between 5 and 10 years of age: deaf children in this age range showed slower reaction times to all stimuli and a reduced ability to detect and accurately report dim stimuli in the far periphery. Deaf children aged 11-12 years performed equally to hearing children. Deaf adolescents aged 13-15 years demonstrated faster reaction times to all peripheral stimuli in comparison with hearing controls. The adolescent results were consistent with adult data, in which deaf adults also showed significantly faster reaction times than hearing controls. Peripheral vision performance on this task reached adult-like levels of maturity in both deaf and hearing children, in both reaction time and detection accuracy, by the age of 11-12 years.
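As a rough illustration of how trial-level data from such a perimetry task might be summarised into the accuracy and reaction-time measures reported above, here is a minimal sketch; the column names and values are hypothetical, not taken from the study:

```python
import pandas as pd

# Hypothetical trial-level data; columns and values are illustrative assumptions.
trials = pd.DataFrame({
    "group": ["deaf", "deaf", "hearing", "hearing"],
    "eccentricity_deg": [30, 85, 30, 85],
    "intensity": ["dim", "dim", "dim", "dim"],
    "detected": [1, 0, 1, 1],          # 1 = stimulus detected and reported correctly
    "rt_ms": [520.0, None, 480.0, 610.0],  # reaction time; missing when not detected
})

# Detection accuracy and median reaction time per group, eccentricity and intensity.
summary = (
    trials.groupby(["group", "eccentricity_deg", "intensity"])
          .agg(accuracy=("detected", "mean"), median_rt_ms=("rt_ms", "median"))
          .reset_index()
)
print(summary)
```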
We present a behavioural task designed to investigate how novel instrumental actions are discovered and learnt. The task consists of free movement with a manipulandum, during which the participant can explore the full range of possible movements, all of which are recorded. A subset of these movements, the ‘target’, is set to trigger a reinforcing signal; the task is to discover which movements of the manipulandum evoke that signal. Targets can be defined in spatial, temporal, or kinematic terms, can combine these aspects, or can represent the concatenation of actions into a larger gesture. The task allows the study of how the specific elements of behaviour that cause the reinforcing signal are identified, refined and stored by the participant. It provides a paradigm in which the exploratory motive drives learning, and as such we view it as being in the tradition of Thorndike [1]. Most importantly, it allows for repeated measures: once a novel action is acquired, the criterion for triggering reinforcement can be changed, requiring a new action to be discovered. Here we present data from both humans and rats, showing that the task is easily scaled in difficulty, is adaptable across species, and produces a rich set of behavioural measures offering new and valuable insight into the action-learning process.
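The abstract does not specify the implementation, but the core logic (a recorded manipulandum trajectory tested against a configurable target criterion that triggers reinforcement) might be sketched as follows. The circular spatial region used here is a hypothetical example of one way a ‘target’ could be defined:

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) position of the manipulandum

@dataclass
class SpatialTarget:
    """Hypothetical spatial target: a circular region the movement must enter."""
    centre: Point
    radius: float

    def is_hit(self, trajectory: List[Point]) -> bool:
        cx, cy = self.centre
        return any((x - cx) ** 2 + (y - cy) ** 2 <= self.radius ** 2
                   for x, y in trajectory)

def run_trial(trajectory: List[Point], target: SpatialTarget) -> bool:
    """Deliver the reinforcing signal if the recorded movement satisfies the target."""
    reinforced = target.is_hit(trajectory)
    if reinforced:
        print("Reinforcement delivered")
    return reinforced

# Once the action is acquired, the criterion can be swapped for a new target,
# requiring a new action to be discovered (supporting a repeated-measures design).
run_trial([(0.0, 0.0), (0.4, 0.6), (1.1, 0.9)], SpatialTarget(centre=(1.0, 1.0), radius=0.2))
```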