We deployed the Multiple Necessary Cues (MNC) discrimination task to see if pigeons can simultaneously attend to four different dimensions of complex visual stimuli. Specifically, we trained nine pigeons (Columba livia) on a go/no-go discrimination to peck only 1 of 16 compound stimuli created from all possible combinations of two stimulus values from four separable visual dimensions: shape (circle/square), size (large/small), line orientation (horizontal/vertical), and brightness (dark/light). Some of the pigeons had CLHD (circle, large, horizontal, dark) as the positive stimulus (S+), whereas others had SSVL (square, small, vertical, light) as the S+. We recorded touchscreen pecking during the first 15 s that each stimulus was presented on each training trial. Discrimination training continued until pigeons' rates of responding to all 15 negative stimuli (S-s) fell to less than 15% of their response rates to the S+. All pigeons acquired the MNC discrimination, suggesting that they attended to all four dimensions of the multidimensional stimuli. Learning rate was similar for all four dimensions, indicating equivalent salience of the discriminative stimuli. The more dimensions along which the S-s differed from the S+, the faster was discrimination learning, suggesting an added benefit from increasing perceptual disparities of the S-s from the S+. Finally, evidence of attentional tradeoffs among the four dimensions was seen during discrimination learning, raising interesting questions concerning the possible control of behavior by elemental and configural stimuli.
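As a rough illustration of the design described above, the sketch below enumerates the 16 compound stimuli and applies the 15% acquisition criterion; the variable names, data layout, and example S+ assignment are assumptions made for illustration, not the authors' materials or code.

# Hypothetical sketch of the MNC stimulus set and acquisition criterion.
from itertools import product

DIMENSIONS = {
    "shape": ("circle", "square"),
    "size": ("large", "small"),
    "orientation": ("horizontal", "vertical"),
    "brightness": ("dark", "light"),
}

# All 2^4 = 16 compound stimuli, each a tuple of one value per dimension.
stimuli = list(product(*DIMENSIONS.values()))
assert len(stimuli) == 16

S_PLUS = ("circle", "large", "horizontal", "dark")  # CLHD, as for some birds

def criterion_met(peck_rates, s_plus=S_PLUS, threshold=0.15):
    """True when responding to every S- is below 15% of the S+ response rate.

    peck_rates: dict mapping each compound stimulus to a response rate
    (e.g., pecks during the first 15 s of each presentation, averaged
    over a session).
    """
    s_plus_rate = peck_rates[s_plus]
    return all(rate < threshold * s_plus_rate
               for stim, rate in peck_rates.items() if stim != s_plus)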
Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of this contextual cueing effect using a novel Cueing-Miscueing design. Pigeons had to peck a target which could appear in one of four possible locations on four possible color backgrounds or four possible color photographs of real-world scenes. On 80% of the trials, each of the contexts was uniquely paired with one of the target locations; on the other 20% of the trials, each of the contexts was randomly paired with the remaining target locations. Pigeons came to exhibit robust contextual cueing when the context preceded the target by 2 s, with reaction times to the target being shorter on correctly-cued trials than on incorrectly-cued trials. Contextual cueing proved to be more robust with photographic backgrounds than with uniformly colored backgrounds. In addition, during the context-target delay, pigeons predominantly pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. These findings confirm the effectiveness of animal models of contextual cueing and underscore the important role of associative learning in producing the effect.
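The 80%/20% Cueing-Miscueing contingency can be sketched as a simple trial schedule; the context and location labels, trial counts, and function below are hypothetical illustrations of the design, not the authors' procedure code.

# Illustrative sketch of the 80/20 Cueing-Miscueing trial schedule.
import random

CONTEXTS = ["context_A", "context_B", "context_C", "context_D"]
LOCATIONS = ["loc_1", "loc_2", "loc_3", "loc_4"]
PAIRING = dict(zip(CONTEXTS, LOCATIONS))  # each context's cued target location

def make_trials(n_per_context=20, p_cued=0.80, seed=0):
    """Return a shuffled list of (context, target_location, cued) trials."""
    rng = random.Random(seed)
    trials = []
    for context in CONTEXTS:
        n_cued = round(p_cued * n_per_context)
        # Correctly-cued trials: target appears at the context's paired location.
        trials += [(context, PAIRING[context], True)] * n_cued
        # Miscued trials: target appears at one of the remaining locations.
        others = [loc for loc in LOCATIONS if loc != PAIRING[context]]
        trials += [(context, rng.choice(others), False)
                   for _ in range(n_per_context - n_cued)]
    rng.shuffle(trials)
    return trials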
In meta-analyses, it is customary to compute a confidence interval for the overall mean effect (ρ̄ or δ̄), but not for the underlying standard deviation (τ) or the lower bound of the credibility interval (the 90% credibility value, 90%CV), even though the latter quantities are often as important to the interpretation as is the overall mean. We introduce 2 methods of computing confidence intervals for the lower bound (Lawless and bootstrap). We compare both methods using 3 lower-bound estimators (Schmidt-Hunter, Schmidt-Hunter with k correction, and Morris/Hedges, labeled HOVr/HOVd) in 2 Monte Carlo studies (1 for correlations and 1 for standardized mean differences) and illustrate their application to published meta-analyses. For correlations, we found that HOVr bootstrap confidence intervals yielded coverage greater than 90% across a wide variety of conditions provided that there were at least 25 studies. For the standardized mean difference, all 3 methods produced acceptable results using the bootstrap for the lower bound confidence interval provided that there were at least 20 studies with an average sample size of at least 50. When the number of studies was small (k = 8 or 10), coverage was less than 90% and confidence intervals were very wide. Even with larger numbers of studies, if there was indirect range restriction coupled with a small underlying between-studies variance, the between-studies variance was poorly estimated and coverage of the lower bound suffered. We provide software to allow meta-analysts to compute bootstrap confidence intervals for the estimators described in the article.
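As a minimal sketch of the general approach, assuming a bare-bones Schmidt-Hunter-style estimate with no artifact corrections and the common definition of the 90% credibility value as the mean correlation minus 1.28 times SD_rho, the code below resamples whole studies to form a percentile bootstrap confidence interval for that lower bound. It is an illustration only, not the authors' software and not the HOVr/HOVd estimators.

# Study-level percentile bootstrap CI for the 90% credibility value (sketch).
import numpy as np

def bare_bones(r, n):
    """Sample-size-weighted mean correlation and SD_rho (tau) estimate."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    mean_r = np.sum(n * r) / np.sum(n)
    var_obs = np.sum(n * (r - mean_r) ** 2) / np.sum(n)   # observed variance
    var_err = np.mean((1 - mean_r ** 2) ** 2 / (n - 1))   # sampling-error variance
    tau = np.sqrt(max(var_obs - var_err, 0.0))
    return mean_r, tau

def cv90_bootstrap_ci(r, n, n_boot=2000, seed=0):
    """Percentile bootstrap CI for the 90% credibility value (mean_r - 1.28 * tau),
    resampling whole studies with replacement."""
    rng = np.random.default_rng(seed)
    r, n = np.asarray(r, float), np.asarray(n, float)
    k = len(r)
    cvs = []
    for _ in range(n_boot):
        idx = rng.integers(0, k, k)
        m, t = bare_bones(r[idx], n[idx])
        cvs.append(m - 1.28 * t)
    return np.percentile(cvs, [5, 95])   # two-sided 90% CI for the lower bound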
Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of such contextual cueing. Pigeons had to peck a target which could appear in one of four locations on color photographs of real-world scenes. On half of the trials, each of four scenes was consistently paired with one of four possible target locations; on the other half of the trials, each of four different scenes was randomly paired with the same four possible target locations. In Experiments 1 and 2, pigeons exhibited robust contextual cueing when the context preceded the target by 1 s to 8 s, with reaction times to the target being shorter on predictive-scene trials than on random-scene trials. Pigeons also responded more frequently during the delay on predictive-scene trials than on random-scene trials; indeed, during the delay on predictive-scene trials, pigeons predominantly pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. In Experiment 3, involving left-right and top-bottom scene reversals, pigeons exhibited stronger control by global than by local scene cues. These results attest to the robustness and associative basis of contextual cueing in pigeons.