We demonstrate that task relevance dissociates visual awareness from knowledge activation, creating a state of seeing without knowing: visual awareness of familiar stimuli without recognizing them. We rely on the fact that in order to experience a Kanizsa illusion, participants must be aware of its inducers. While people could indicate the orientation of the illusory rectangle with great ease (signifying that they had consciously experienced the illusion's inducers), almost 30% of them could not report the inducers' color. Thus, people can see, in the sense of phenomenally experiencing, but not know, in the sense of recognizing what the object is or activating appropriate knowledge about it. Experiment 2 tests whether relevance-based selection operates within objects and shows that, contrary to the pattern of results found with features of different objects in our previous studies and replicated in Experiment 1, selection does not occur when both the relevant and the irrelevant features belong to the same object. We discuss these findings in relation to existing theories of consciousness, to attention and inattentional blindness, and to the role of cognitive load, object-based attention, and the use of self-reports as measures of awareness.
Learning the structure of the environment (e.g., what usually follows what) enables animals to behave effectively and to prepare for future events. Unintentional learning can efficiently produce such knowledge, as has been demonstrated with the Artificial Grammar Learning (AGL) paradigm, among others. It has been argued that selective attention is a necessary and sufficient condition for visual implicit learning. Experiment 1 shows that spatial attention is not sufficient for implicit learning: learning does not occur if the stimuli instantiating the structure are task irrelevant. In a second experiment, we demonstrate that this holds even when attentional resources are abundantly available. Together, these results challenge the current view of the relations between attention, resources, and implicit learning.
Visual working memory (VWM) is traditionally assumed to be immune to proactive interference (PI). However, in a recent study (Endress & Potter, 2014), performance in a visual memory task was superior when all items were unique, and hence interference from previous trials was impossible, compared to a standard condition in which a limited set of repeating items was used and stimuli from previous trials could interfere with the current trial. Furthermore, when all the items were unique, the estimated memory capacity far exceeded typical capacity estimates. Consequently, the researchers suggested the existence of a separate memory buffer, “temporary memory,” which has an unbounded capacity for meaningful items. However, before accepting this conclusion, methodological differences between the repeated-unique procedure and typical VWM estimation procedures should be considered. Here, we tested the extent to which the exceptional set of heterogeneous, complex, meaningful real-world objects contributed to the large PI effect in the repeated-unique procedure. Thus, the same paradigm was employed with a set of real-world objects and with homogeneous sets (e.g., houses, faces) in which the items were meaningful yet less visually distinct, and participants had to rely on subtle visual details to perform the task. The results revealed a large PI effect for the heterogeneous real-world objects but substantially smaller effects for the homogeneous sets. These findings suggest that there is no need to postulate a new memory buffer. Instead, we suggest that VWM capacity and vulnerability to PI are strongly influenced by task characteristics, and specifically by the distinctiveness of the stimuli.
Faces are one of the most important signals for reading people's mental states. In sync with their apparent "chronic" (cross-situational) relevance, faces have been argued to be processed independently of the task one is currently performing. Many of these demonstrations have involved "capture of attention" by, or increased interference from, faces functioning as distractors. Here we ask whether multiple repetitions of task-irrelevant faces leave a trace in the system. Specifically, we tested whether repeating structures instantiated by task-irrelevant faces are unintentionally, or implicitly, learned. Our findings indicate that although faces are indeed unique, in that they are the only stimulus found to lead to implicit learning of complex rules when irrelevant, such learning is small in magnitude. Although these results support the conjecture that task-irrelevant faces are processed, the functional significance of this learning remains to be assessed.