Designing a 3D game scene is a laborious task that typically involves the synthesis, coloring, and placement of 3D models within the scene. To reduce this workload, machine learning can be applied to automate parts of game scene development. Earlier research has already tackled automated generation of game scene backgrounds with machine learning; model auto-coloring, however, remains an underexplored problem. Automatically coloring a 3D model is challenging, especially for the digital representation of a colorful, multipart object: the method must "understand" the object's composition and the coloring scheme of each of its parts. Existing single-stage methods carry their own caveats, such as requiring segmentation of the object or generating individual parts that must then be assembled into the final model. We address these limitations by proposing a two-stage training approach to synthesize auto-colored 3D models. In the first stage, we obtain a 3D point cloud representing a 3D object, whilst in the second stage, we assign colors to the points within that cloud. Next, by leveraging the so-called triangulation trick, we generate a 3D mesh in which each surface is colored by interpolating the colors of the points that form the vertices of the corresponding mesh triangle. This approach allows us to generate a smooth coloring scheme. Experimental evaluation shows that our two-stage approach outperforms traditional single-stage techniques in both shape reconstruction and coloring.
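
The interpolation step described above can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes the standard barycentric-coordinate scheme commonly used to blend per-vertex attributes across a triangle face, with hypothetical inputs `tri_xyz` (vertex positions) and `tri_rgb` (per-vertex colors taken from the colored point cloud).

```python
import numpy as np

def barycentric_color(p, tri_xyz, tri_rgb):
    """Interpolate a surface point's color from its triangle's vertex colors.

    p       : (3,) query point, assumed to lie on the triangle's plane
    tri_xyz : (3, 3) triangle vertex positions (one colored point per vertex)
    tri_rgb : (3, 3) per-vertex RGB colors in [0, 1]
    """
    a, b, c = tri_xyz
    v0, v1, v2 = b - a, c - a, p - a
    # Standard barycentric-coordinate computation via dot products.
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w
    # Blending vertex colors with barycentric weights varies the color
    # smoothly across the face, avoiding hard seams between triangles.
    return u * tri_rgb[0] + v * tri_rgb[1] + w * tri_rgb[2]

# Example: a triangle with red, green, and blue vertices; its centroid
# interpolates to an even mix of the three colors (~[0.33, 0.33, 0.33]).
tri_xyz = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tri_rgb = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(barycentric_color(tri_xyz.mean(axis=0), tri_xyz, tri_rgb))
```

Because every mesh vertex inherits a color from the second-stage point cloud, interpolating in this manner is what produces the smooth coloring scheme claimed above.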