In this thesis, we present a complete framework for inverse rendering of faces with a 3D Morphable Model. By decomposing the image formation process into geometric and photometric parts, we are able to state the problem as a multilinear system which can be solved accurately and efficiently. As we treat each contribution as independent, the objective function is convex in the parameters and a globally optimal solution can be found. We start by recovering 3D shape using a novel algorithm which incorporates generalisation errors of the model obtained from empirical measurements. The algorithm is extended so that it can efficiently deal with mixture distributions. We then describe three methods to recover facial texture and, for the second and third methods, also diffuse lighting, specular reflectance and camera properties from a single image. These methods make increasingly weak assumptions and can all be solved in a linear fashion. We further modify our framework so that it accounts for global illumination effects. This is achieved by incorporating statistical models of ambient occlusion and bent normals into the image formation model. We show that solving for ambient occlusion and bent normal parameters as part of the fitting process improves the accuracy of the estimated texture map and illumination environment. We present results on challenging data, rendered under complex natural illumination with both specular reflectance and occlusion of the illumination environment. We evaluate our findings on publicly available datasets, where we obtain state-of-the-art results. Finally, we present a practical method to synthesise a larger population from a small training set and show how the new instances can be used to build a flexible PCA model.