A well-established phenomenon in the study of attention is the attentional blink: a deficit in reporting the second of two targets when it occurs 200-500 msec after the first. Although the effect has been shown to be robust across a variety of task conditions, not every individual participant shows it. We measured electroencephalographic activity in "nonblinkers" and "blinkers" during a task in which two letters had to be detected in a sequential stream of digit distractors. Nonblinkers showed an earlier P3 peak, suggesting that they consolidate information more quickly than blinkers do. Differences in frontal selection positivity were also found: nonblinkers showed a larger difference between target and distractor activation than blinkers did. Nonblinkers thus seem to extract target information better than blinkers do, allowing them to reject distractors more easily and leaving sufficient resources available to report both targets.
Three experiments tested whether the attentional blink (AB; a deficit in reporting the second of two targets when it occurs 200-500 msec after the first) can be attenuated by providing information about the target onset asynchrony (TOA) of the second target relative to the first. Blocking the TOA did not improve second-target performance relative to a condition in which the TOA varied randomly from trial to trial (Experiment 1). In contrast, explicitly cuing the TOA on a trial-by-trial basis attenuated the AB without a cost to first-target identification (Experiments 2 and 3). The results suggest that temporal cues influence the allocation of attentional resources by adding temporal information to the perceptual description of the second target that can then be used to filter targets from nontargets, resulting in enhanced accuracy.
A valid interpretation of most statistical techniques requires that one or more assumptions be met. In published articles, however, little information tends to be reported on whether the data satisfy the assumptions underlying the statistical techniques used. This could be due to self-selection: only manuscripts whose data fulfill the assumptions are submitted. Another explanation could be that violations of assumptions are rarely checked for in the first place. We studied whether and how 30 researchers checked fictitious data for violations of assumptions in their own working environment. Participants were asked to analyze the data as they would their own, using often-used and well-known techniques such as the t procedure, ANOVA, and regression (or nonparametric alternatives). We found that the assumptions of the techniques were rarely checked and that, if they were, it was usually by means of a statistical test. Interviews afterward revealed a general lack of knowledge about the assumptions, about the robustness of the techniques with regard to those assumptions, and about how (or whether) assumptions should be checked. These findings suggest that checking for violations of assumptions is not a well-considered choice, and that the use of statistics can be described as opportunistic.
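To make concrete what "checking assumptions" for a t test could involve, the following is a minimal sketch in Python using SciPy. The specific tests (Shapiro-Wilk for normality, Levene for homogeneity of variances), the 0.05 threshold, and the simulated data are illustrative assumptions, not the procedure used in the study described above.

```python
# Illustrative assumption checks before a two-sample comparison.
# Data, thresholds, and decision rules are hypothetical examples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)
group_b = rng.normal(loc=5.5, scale=1.2, size=30)

# Normality within each group (Shapiro-Wilk test).
_, p_norm_a = stats.shapiro(group_a)
_, p_norm_b = stats.shapiro(group_b)

# Homogeneity of variances across groups (Levene's test).
_, p_var = stats.levene(group_a, group_b)

if min(p_norm_a, p_norm_b) < 0.05:
    # Normality doubtful: fall back to a rank-based alternative.
    stat, p = stats.mannwhitneyu(group_a, group_b)
else:
    # equal_var=False selects Welch's t test, which does not
    # assume equal variances.
    stat, p = stats.ttest_ind(group_a, group_b,
                              equal_var=(p_var >= 0.05))

print(f"test statistic = {stat:.3f}, p = {p:.4f}")
```

Note that relying solely on preliminary significance tests to validate assumptions is itself debated; graphical checks (Q-Q plots, residual plots) are a common complement.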