Numerous studies have tested the hypothesis that facial identity and emotional expression are independently processed, but a solid conclusion has been difficult to reach, with the literature showing contradictory results. We argue that this is partly because different researchers have used different definitions of holism and independence, usually vague and/or simply operational, and partly due to a lack of proper stimulus control. Here, we performed a study using realistic 3-D computer-generated faces for which the discriminability of identities and expressions, the intensity of the expressions, and the low-level features of the faces were controlled. A large number of participants, distributed across twelve experimental groups, performed identification tasks for the six basic emotional expressions and the neutral expression. A multidimensional signal detection model was used to analyze the data, allowing us to dissociate among multiple formally defined notions of independence and holism. Results showed strong and robust violations of perceptual independence that were consistent across all experiments, suggesting holistic processing of face identity and expression. To date, our results provide the strongest evidence for holistic/Gestalt processing found among face perception studies that have used formal definitions of holism. Perceptual separability results were inconsistent for most expressions across the identity set, except for anger, which showed complete perceptual separability from identity, and happiness, which was perceptually separable from identity but not vice versa. As in previous studies using formal definitions, we consistently found a form of holism at the level of decisional rather than perceptual processes, which underscores the importance of using tasks and analyses that can dissociate between the two.