Our ability to recognize faces regardless of viewpoint is a key property of the primate visual system. Traditional theories hold that facial viewpoint is represented by view-selective mechanisms at early visual processing stages and that representations become increasingly tolerant to viewpoint changes in higher-level visual areas. Newer theories, based on single-neuron electrophysiological recordings in monkeys, posit an additional intermediate processing stage that is invariant to mirror-symmetric face views. Consistent with traditional theories, human studies combining neuroimaging with multivariate pattern analysis (MVPA) have provided evidence of view-selectivity in early visual cortex. In higher-level visual areas, however, contradictory results have been reported concerning the existence of mirror-symmetrically tuned representations in humans. We argue that these discrepancies reflect low-level stimulus confounds and data-analysis choices. To probe for low-level confounds, we analyzed images from two widely used face databases. Analyses of mean image luminance and contrast revealed biases across face views that are well described by even polynomials, that is, by mirror-symmetric functions of view angle. To explain the major trends across human neuroimaging studies of viewpoint selectivity, we constructed a network model incorporating three biological constraints: cortical magnification, convergent feedforward projections, and interhemispheric connections. Given the identified low-level biases, we show that a gradual increase in interhemispheric connections across network layers is sufficient to replicate both the view-tuning reported in early processing stages and the mirror-symmetry reported in high-level processing stages. Two data-analysis decisions, the choice of pattern-dissimilarity measure and whether the data were recentered, accounted for the variable observation of mirror-symmetry in late processing stages. The model provides a unifying explanation of MVPA studies of viewpoint selectivity, and we show how common analysis choices can lead to erroneous conclusions.
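
To make the even-polynomial claim concrete, the following is a minimal sketch, in Python with NumPy, of the kind of fit described above. The view angles and luminance values are placeholders invented for illustration, not measurements from the databases analyzed in the study.

```python
import numpy as np

# Hypothetical view angles in degrees (-90 = left profile, 0 = frontal,
# +90 = right profile) and placeholder mean-luminance values; the study
# computed such values from face-database images, but these numbers are
# made up for illustration.
angles = np.array([-90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0])
mean_luminance = np.array([0.42, 0.45, 0.48, 0.50, 0.48, 0.45, 0.42])

def fit_even_polynomial(x, y, max_degree=4):
    """Least-squares fit using only even powers of x (a mirror-symmetric model)."""
    x = x / np.max(np.abs(x))                 # rescale for numerical stability
    powers = np.arange(0, max_degree + 1, 2)  # 0, 2, 4, ...
    design = np.stack([x ** p for p in powers], axis=1)
    coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return powers, coeffs, design @ coeffs

powers, coeffs, fitted = fit_even_polynomial(angles, mean_luminance)
print("even powers:", powers)                 # [0 2 4]
print("max |residual|:", np.max(np.abs(fitted - mean_luminance)))
```

Because an even polynomial satisfies f(-x) = f(x), mirror-symmetric views (e.g., -30° and +30°) receive identical predicted values, which is exactly the kind of low-level bias that could masquerade as neural mirror-symmetry downstream.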
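
The role of the two analysis choices can likewise be illustrated with a toy example. The sketch below is not the study's actual pipeline; it demonstrates a general mathematical point: Euclidean-distance RDMs are unchanged by recentering (subtracting the grand-mean pattern from every condition's pattern), whereas correlation-distance RDMs are not, so the two dissimilarity measures can support different conclusions from the same data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_conditions, n_voxels = 5, 200

# Toy response patterns: independent condition-specific noise plus a large
# component shared across all conditions (values are arbitrary).
shared = rng.normal(size=n_voxels)
patterns = 5.0 * shared + rng.normal(size=(n_conditions, n_voxels))

def euclidean_rdm(X):
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def correlation_rdm(X):
    return 1.0 - np.corrcoef(X)  # rows of X are treated as patterns

# Recentering: subtract the grand-mean pattern across conditions.
recentered = patterns - patterns.mean(axis=0, keepdims=True)

print(np.allclose(euclidean_rdm(patterns), euclidean_rdm(recentered)))    # True
print(np.allclose(correlation_rdm(patterns), correlation_rdm(recentered)))  # False
```

Because correlation distance normalizes each pattern, removing a shared component can turn strongly correlated patterns into uncorrelated or anticorrelated ones, while for Euclidean distance recentering is a no-op at the RDM level; this is why the abstract singles out both choices as determinants of whether mirror-symmetry is observed.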