Abstract. Two models for visual pattern recognition are described: the one based on the application of internal compensatory transformations to pattern representations, the other based on the encoding of patterns in terms of local features and the spatial relations between these local features. These transformation and relational-structure models are each endowed with the same experimentally observed invariance properties, which include independence of pattern translation and pattern jitter and, depending on the particular versions of the models, independence of pattern reflection and inversion (180° rotation). Each model is tested by comparing its predicted recognition performance with experimentally determined recognition performance, using as stimuli random-dot patterns that were variously rotated in the plane. The level of visual recognition of such patterns is known to depend strongly on rotation angle. It is shown that the relational-structure model equipped with an invariance to pattern inversion gives responses in close agreement with the experimental data over all pattern rotation angles. In contrast, the transformation model equipped with the same invariances gives poor agreement with the experimental data. Some implications of these results are considered.
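The distinction between the two classes of model can be caricatured in code. The sketch below is not the paper's own formulation but an assumed, simplified illustration: patterns are taken to be (n, 2) arrays of dot coordinates; the transformation model internally compensates for translation (by centring) and, optionally, for inversion before comparing dot positions directly, while the relational-structure model compares nearest-neighbour displacement vectors as a crude stand-in for "local features and spatial relations", again with an optional inversion invariance. All function names and the residual measure are illustrative choices, not the authors'.

```python
import numpy as np


def local_relations(dots):
    """Describe a dot pattern by the displacement from each dot to its
    nearest neighbour -- a simple stand-in for 'local features and the
    spatial relations between them'.  Translation-invariant by construction."""
    d = np.linalg.norm(dots[:, None, :] - dots[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    nearest = d.argmin(axis=1)
    return dots[nearest] - dots


def _residual(x, y):
    """Mean nearest-element residual between two sets of 2-D vectors
    (smaller means a closer match)."""
    d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    return d.min(axis=1).mean()


def relational_match(a, b, invert_invariant=True):
    """Relational-structure model (sketch): compare the local relations of
    the two patterns; if invert_invariant, also accept the 180-degree-rotated
    (negated) relations of b and keep the better score."""
    ra, rb = local_relations(a), local_relations(b)
    score = _residual(ra, rb)
    if invert_invariant:
        score = min(score, _residual(ra, -rb))
    return score


def transformation_match(a, b, invert_invariant=True):
    """Transformation model (sketch): apply internal compensatory
    transformations (here, centring for translation and an optional
    inversion) to the patterns and compare dot positions directly."""
    a0, b0 = a - a.mean(axis=0), b - b.mean(axis=0)
    score = _residual(a0, b0)
    if invert_invariant:
        score = min(score, _residual(a0, -b0))
    return score
```

In this toy formulation, rotating one pattern in the plane degrades both match scores, and the optional inversion term provides the 180° invariance mentioned above; whether either formulation reproduces the observed dependence of recognition on rotation angle is the empirical question addressed in the body of the paper.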