Training on, or exposure to, a visual feature leads to a long-term improvement in
performance on visual tasks that employ that feature. Such performance improvements and
the processes that govern them are called visual perceptual learning (VPL). As an ever
greater volume of research accumulates in the field, we have reached a point where a
unifying model of VPL should be sought. A new wave of research findings has exposed
diverging results along three major dimensions: specificity versus generalization
of VPL, a lower versus higher brain locus of learning, and task-relevant versus task-irrelevant
VPL. In this review, we propose a new theoretical model in which VPL involves
two distinct stages: a low-level, stimulus-driven stage and a higher-level stage
dominated by task demands. If experimentally verified, this model would not only
constructively unify the currently divergent findings in the VPL field, but would also
significantly advance our understanding of visual plasticity, which may, in turn, inform
interventions to ameliorate diseases affecting vision and other pathological or
age-related visual and nonvisual declines.