According to the Theory of Event Coding (TEC; Hommel, Müsseler, Aschersleben, & Prinz, 2001), action and perception are represented in a shared format in the cognitive system by means of feature codes. In implicit sequence learning research, it is still common to draw a conceptual distinction between independent motor and perceptual sequences. This supposedly independent learning is held to take place in encapsulated modules (Keele, Ivry, Mayr, Hazeltine, & Heuer, 2003) that process information along single dimensions. These dimensions have so far remained underspecified; in particular, it is unclear whether stimulus and response characteristics are processed in separate modules. Here, we suggest that feature dimensions as described in the TEC should be viewed as the basic content of the modules of implicit learning: each module processes all stimulus and response information related to a certain feature dimension of the perceptual environment. In three experiments, we used a serial reaction time task to investigate the nature of the basic units of implicit learning, taking stimulus location sequence learning as a test case. The results show that a stimulus location sequence and a response location sequence cannot be learned without interference (Experiment 2) unless one of the sequences can be coded via an alternative, nonspatial dimension (Experiment 3). These results support the notion that spatial location is one module of the implicit learning system and, consequently, that there are no separate processing units for stimulus versus response locations.
An important question in implicit sequence learning research is how the learned information is represented. In earlier models, the representations underlying implicit learning were viewed as either purely motor or purely perceptual. These different conceptions were later integrated by multidimensional models such as the Dual System Model of Keele et al. (Psychol Rev 110(2):316-339, 2003). According to this model, different types of sequential information can be learned in parallel, as long as each sequence comprises only a single dimension (e.g., shapes, colors, or response locations). The term dimension, though, is underspecified, as it remains an open question whether the learning modules involved are restricted to motor or to perceptual information. This study aims to show that the modules of the implicit learning system are not specific to motor or perceptual processing. Rather, each module processes an abstract feature code that represents both response- and perception-related information. In two experiments, we showed that learning of a perceived stimulus-location sequence transferred to a motor response-location sequence. This result shows that the mere perception of a sequential feature automatically activates the respective motor feature, supporting the notion of abstract feature codes as the basic modules of the implicit learning system. This result was obtained, however, only when the task instructions emphasized encoding of the stimulus locations rather than of the color features. This limitation is discussed in light of the importance of the instructed task set.