Digital 3D representations of manufactured parts play a crucial role in the quality assurance of small, individualized lot sizes. Machine vision systems such as optical 3D scanners guided by an industrial robotic arm allow for a contactless, complete digital reconstruction of surface geometries. To digitize the whole geometry, 3D scans must be acquired from multiple viewpoints relative to the part so that, in combination, they cover the entire surface. With efficiency in mind, this results in an optimization problem between high surface area coverage and low measurement effort, referred to as the view planning problem. In the presented work, two popular viewpoint candidate generation methods are implemented: firstly, a surface-based random sampling method, which generates viewpoints within a solution space in which visibility of a given model surface can be expected; secondly, a view sphere approach, which is independent of the object geometry but avoids clustering by generating evenly spaced viewpoints on a sphere around the centre of the object. Using an adjustable remeshing procedure, a multi-stage approach is implemented that generates multiple meshes of different resolutions. This combines the benefits of working on a coarse mesh, such as fast viewpoint candidate evaluation and selection, with the level of detail of a fine mesh. It is found that the mesh resolution can be reduced considerably while maintaining a reasonably high surface area coverage on a reference model. Applying the proposed procedure to view planning for a state-of-the-art 3D fringe light projection scanner with highly sophisticated scanning capabilities, it is demonstrated that the view sphere approach is more suitable for this use case due to the large measurement volumes of the 3D scanner.
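The evenly spaced view sphere sampling mentioned above can be illustrated with a Fibonacci lattice, one common construction for distributing points near-uniformly on a sphere. This is a minimal sketch under that assumption; the text does not specify the exact spacing scheme, and the function name, radius, and centre-facing orientation convention are illustrative.

```python
import math

def fibonacci_sphere(n, radius=1.0):
    """Generate n approximately evenly spaced points on a sphere of the
    given radius using a Fibonacci lattice (an assumed construction;
    the exact scheme used for the view sphere is not specified)."""
    points = []
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n          # even strips along z in (-1, 1)
        r = math.sqrt(max(0.0, 1.0 - z * z))   # circle radius at height z
        theta = golden * i                     # spiral around the axis
        points.append((radius * r * math.cos(theta),
                       radius * r * math.sin(theta),
                       radius * z))
    return points

# Each candidate viewpoint would orient the scanner's optical axis toward
# the object centre (an assumption about the sensor pose convention).
candidates = fibonacci_sphere(64, radius=0.5)
```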
In comparison, the frequently used random sampling approach requires considerably higher computational effort to achieve similar results.
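The fast viewpoint candidate evaluation and selection on a coarse mesh can be sketched as a greedy coverage selection: repeatedly pick the candidate that sees the most still-uncovered mesh faces. This is only an illustrative formulation, assuming per-viewpoint face visibility has already been precomputed (e.g. by ray casting on the coarse mesh); the function names, the `visibility` mapping, and the stopping criterion are assumptions, not the paper's implementation.

```python
def greedy_view_selection(visibility, n_faces, target_coverage=0.98):
    """Greedy viewpoint selection on a coarse mesh.

    visibility: dict mapping viewpoint id -> set of visible face ids
                (assumed precomputed, e.g. via ray casting).
    Returns the selected viewpoint ids and the achieved coverage ratio.
    """
    uncovered = set(range(n_faces))
    selected = []
    while uncovered and len(uncovered) > (1.0 - target_coverage) * n_faces:
        # Pick the candidate that adds the most uncovered faces.
        best = max(visibility, key=lambda v: len(visibility[v] & uncovered))
        gain = visibility[best] & uncovered
        if not gain:  # no remaining candidate improves coverage
            break
        selected.append(best)
        uncovered -= gain
    coverage = 1.0 - len(uncovered) / n_faces
    return selected, coverage

# Toy example: four candidate viewpoints over a six-face mesh.
vis = {0: {0, 1, 2}, 1: {2, 3}, 2: {3, 4, 5}, 3: {0, 5}}
views, cov = greedy_view_selection(vis, n_faces=6, target_coverage=1.0)
```

On a coarse mesh the visibility sets are small, which is what makes this evaluation and selection loop fast; the selected viewpoints can then be applied to the fine mesh to recover the full level of detail.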