Numerous studies have tested the hypothesis that facial identity and emotional expression are processed independently, but a solid conclusion has been difficult to reach, as the literature shows contradictory results. We argue that this is partly because different researchers use different definitions of perceptual integration and independence, which are usually vague and/or merely operational, and partly because of a lack of proper stimulus control. Here, we performed a study using realistic three-dimensional computer-generated faces for which the discriminability of identities and expressions, the intensity of the expressions, and the low-level features of the faces were controlled. A large number of participants, distributed across twelve experimental groups, performed identification tasks for the six basic emotional expressions and the neutral expression (between 2018 and 2019). The data were analyzed with a multidimensional signal detection model, which allowed us to dissociate between multiple formally defined notions of independence and holism. The results showed strong and robust violations of perceptual independence that were consistent across all experiments and suggest Gestalt-like perceptual integration of face identity and expression. To date, our results provide the strongest evidence for holistic/Gestalt processing among face perception studies that have used formal definitions of independence and holism.
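The abstract names a multidimensional signal detection model but does not specify it; the details of the authors' analysis are not given here. As a purely illustrative, hedged sketch, the following assumes a common formulation of such models in which each stimulus (an identity–expression pair) evokes a bivariate Gaussian perceptual distribution, with one axis for identity and one for expression. Under that assumption, perceptual independence for a stimulus means the joint perceptual density factorizes into its marginals, i.e., the covariance between the two perceptual dimensions is zero; a nonzero covariance is one kind of violation. All names and parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_percepts(mean, cov, n=20000):
    """Draw n bivariate Gaussian percepts for one stimulus.

    Column 0 is the (hypothetical) identity dimension and
    column 1 the expression dimension.
    """
    return rng.multivariate_normal(mean, cov, size=n)

# Perceptual independence: diagonal covariance, so the joint
# density factorizes into identity and expression marginals.
indep = sample_percepts([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])

# A violation of perceptual independence: the two perceptual
# dimensions covary within a single stimulus (illustrative value).
violated = sample_percepts([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]])

# Estimate the within-stimulus correlation from the samples.
r_indep = np.corrcoef(indep.T)[0, 1]
r_viol = np.corrcoef(violated.T)[0, 1]
print(f"independent: r = {r_indep:+.2f}")
print(f"violated:    r = {r_viol:+.2f}")
```

This only illustrates one formally defined notion of independence; dissociating it from other notions (e.g., marginal invariance properties or holism) requires fitting the model to identification-confusion data, which is beyond this sketch.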