Irrelevant salient objects may capture our attention and interfere with visual search. Recently, it was shown that distraction by a salient object is reduced when it is presented more frequently at one location than at other locations. The present study investigates whether this reduced distractor interference is the result of proactive spatial suppression, implemented prior to display onset, or of reactive suppression, occurring after attention has been directed to that location. Participants searched for a shape singleton in the presence of an irrelevant salient color singleton that was presented more often at one location (the high-probability location) than at all other locations (the low-probability locations). On some trials, instead of the search task, participants performed a probe task in which they had to detect the offset of a probe dot. The results of the search task replicated previous findings of reduced distractor interference on trials in which the salient distractor was presented at the high-probability location as compared with the low-probability locations. The probe task showed that reaction times were longer for probes presented at the high-probability location than at the low-probability locations. These results indicate that, through statistical learning, the location that is likely to contain a distractor is suppressed proactively (i.e., prior to display onset). This suggests that statistical learning modulates the first feed-forward sweep of information processing by deprioritizing, within the spatial priority map, locations that are likely to contain a distractor.
Cognitive pupillometry is the measurement of pupil size to investigate cognitive processes such as attention, mental effort, working memory, and many others. Currently, there is no commonly agreed-upon methodology for conducting cognitive-pupillometry experiments, and approaches vary widely between research groups and even between different experiments from the same group. This lack of consensus makes it difficult to know which factors to consider when conducting a cognitive-pupillometry experiment. Here we provide a comprehensive, hands-on guide to methods in cognitive pupillometry, with a focus on trial-based experiments in which the measure of interest is the task-evoked pupil response to a stimulus. We cover all methodological aspects of cognitive pupillometry: experimental design, preprocessing of pupil-size data, and statistical techniques to deal with multiple comparisons when testing pupil-size data. In addition, we provide code and toolboxes (in Python) for preprocessing and statistical analysis, and we illustrate all aspects of the proposed workflow through an example experiment and example scripts.
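To make the preprocessing stage described above concrete, here is a minimal, illustrative Python sketch of two steps that are typical for task-evoked pupil responses: linear interpolation across blinks and subtractive baseline correction. The function name, parameters, and the blink-detection rule (zero-valued samples) are assumptions for illustration only; they are not the authors' toolbox API.

import numpy as np

def preprocess_pupil_trace(pupil, baseline_window=(0, 50), blink_value=0.0):
    """Interpolate blink samples (marked by `blink_value`) and apply
    subtractive baseline correction over `baseline_window` (sample indices).
    Hypothetical helper for illustration; not the authors' toolbox API."""
    pupil = np.asarray(pupil, dtype=float)
    samples = np.arange(len(pupil))
    valid = pupil != blink_value  # crude blink detection: zero-valued samples
    if valid.sum() < 2:
        raise ValueError("Not enough valid samples to interpolate.")
    # Linear interpolation across blink gaps
    cleaned = np.interp(samples, samples[valid], pupil[valid])
    # Subtractive baseline correction: subtract mean of the pre-stimulus window
    baseline = cleaned[baseline_window[0]:baseline_window[1]].mean()
    return cleaned - baseline

# Example: one simulated trial with a blink (zeros) around sample 100
trial = np.concatenate([np.full(100, 5000.0),
                        np.full(20, 0.0),      # blink
                        np.full(100, 5200.0)])
corrected = preprocess_pupil_trace(trial, baseline_window=(0, 50))

In practice, the task-evoked pupil response is then obtained by averaging such baseline-corrected traces across trials per condition, before applying statistical techniques that control for multiple comparisons across time points.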
Psychological research on pseudo-profound bullshit (randomly assembled buzzwords plugged into a syntactic structure) has only recently begun. Most such research has focused on dispositional traits, such as thinking styles or political orientation, but none has investigated contextual factors. In two studies, we introduce a new paradigm investigating contextual effects on the perceived profundity of pseudo-profound bullshit and of meaningful quotes. In Study 1, all participants rated the profundity of statements in three contexts: (a) presented in isolation, (b) attributed to a famous author, or (c) embedded in a vignette (short story). Study 2 served as a conceptual replication in which participants rated statements in only one of the three contexts. Overall, our results demonstrate that although contextual information such as an author's name increases the perceived profundity of bullshit, it has an inconsistent effect on meaningful quotes. The present study helps to better understand bullshit receptivity while opening a new line of research.