Dense surface registration, commonly used in computer science, could aid the biological sciences in the accurate and comprehensive quantification of biological phenotypes. However, few toolboxes exist that are openly available, non-expert friendly, and validated in a way relevant to biologists. Here, we report a customizable toolbox for reproducible high-throughput dense phenotyping of 3D images, specifically geared towards biological use. Given a target image, a template is first oriented, repositioned, and scaled to the target during a scaled rigid registration step, then transformed further to fit the specific shape of the target using a non-rigid transformation. As validation, we use n = 41 3D facial images to demonstrate that the MeshMonk registration is accurate, with an average error of 1.26 mm across 19 landmarks between placements from manual observers and those from the MeshMonk toolbox. We also report no variation in landmark position or centroid size significantly attributable to the landmarking method used. Though validated using 19 landmarks, the MeshMonk toolbox produces a dense mesh of vertices across the entire surface, thus facilitating more comprehensive investigations of 3D shape variation. This expansion opens up exciting avenues of study in assessing biological shapes to better understand their phenotypic variation, genetic and developmental underpinnings, and evolutionary history.
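MeshMonk itself is distributed as a MATLAB toolbox; as a language-agnostic illustration of the scaled rigid step described above, the sketch below aligns a template point set to a target with rotation, translation, and uniform scaling via an orthogonal Procrustes (Kabsch) solution. All names are illustrative, and a one-to-one point correspondence is assumed (MeshMonk establishes correspondence as part of the registration itself).

```python
import numpy as np

def scaled_rigid_align(template, target):
    """Align `template` (n x 3) to `target` (n x 3) with rotation,
    translation, and uniform scale, minimizing least-squares error.
    Assumes one-to-one point correspondence between the two sets."""
    mu_t, mu_g = template.mean(0), target.mean(0)
    A, B = template - mu_t, target - mu_g
    # Optimal rotation via SVD of the cross-covariance matrix
    U, S, Vt = np.linalg.svd(B.T @ A)
    # Reflection guard: force a proper rotation (det(R) = +1)
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    # Optimal uniform scale given the rotation
    s = (S * np.array([1.0, 1.0, d])).sum() / (A ** 2).sum()
    return s * A @ R.T + mu_g
```

In the full pipeline this rigid alignment would only provide the initialization; the subsequent non-rigid transformation deforms the template to fit the target's specific shape.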
Relative contributions to landmark variation. The experimental design employed is a complete randomized block design with repeated measures; the magnitude of variance within groups is therefore attributable to differences made by an observer measuring the same object, or in our case the differences made by the MeshMonk toolbox in registering the same image multiple times (MeshMonk precision). This design also allows us to partition the magnitude of variance attributable to replicate images of the same person (participant error) and that attributable to the camera system (camera error). For these calculations we used a repeated measures ANOVA on the non-scaled and non-reflected GPA-aligned coordinates, with camera as a fixed effect and individual and replicate as random effects [40–42]. For the set of 19 traditional landmarks, an ANOVA using the non-scaled and non-reflected GPA-aligned coordinates was similarly performed, with camera as a fixed effect and individual as a random effect.
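The core of this variance partitioning can be sketched with a simple one-way decomposition of sums of squares over replicated, GPA-aligned landmark configurations. This is a simplified illustration, not the full mixed-effects model used in the study: it partitions only between-individual variance from within-individual (replicate/registration) variance, and the array layout and function name are assumptions.

```python
import numpy as np

def partition_variance(coords):
    """coords: array of shape (individuals, replicates, landmarks, 3)
    holding GPA-aligned coordinates. Returns the sum of squares
    between individuals and the residual sum of squares within
    individuals (replicate / registration error)."""
    n_ind, n_rep = coords.shape[:2]
    grand_mean = coords.mean(axis=(0, 1))        # (landmarks, 3)
    ind_means = coords.mean(axis=1)              # (individuals, landmarks, 3)
    # Between-individual SS: replicate count times squared deviation
    # of each individual mean from the grand mean
    ss_between = n_rep * ((ind_means - grand_mean) ** 2).sum()
    # Within-individual SS: deviation of each replicate from its
    # individual's mean configuration
    ss_within = ((coords - ind_means[:, None]) ** 2).sum()
    return ss_between, ss_within
```

By the usual ANOVA identity, the two components sum to the total sum of squares about the grand mean, so the within-individual share gives a direct measure of registration precision relative to true biological variation.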
Introduction: Although stereo-photogrammetry is increasingly popular for scanning faces three-dimensionally, commercial solutions remain quite expensive, limiting their accessibility. We propose a more affordable, custom-built photogrammetry setup (SF3D) and evaluate its within- and between-system variability. Methods: 29 subjects and a mannequin head were imaged three times using SF3D and a commercially available system (3dMDFace). Next, an anthropometric mask was mapped visco-elastically onto the reconstructed meshes using MeshMonk. Within-system shape variability was determined by calculating the RMSE of the Procrustes distance between each of the subject's three scans and the subject's ground truth (calculated by averaging the mappings after a non-scaled generalized Procrustes superimposition). Inter-system variability was determined by similarly comparing the ground truth mappings of both systems. A two-factor Procrustes ANOVA was used to partition the inter-system shape variability to better understand the source of the discrepancies between the facial shapes acquired by both systems. Results: The RMSE of the within-system shape variability for 3dMDFace and SF3D was 0.52 ± 0.07 mm and 0.45 ± 0.17 mm, respectively. The corresponding values for the mannequin head were 0.42 ± 0.02 mm and 0.29 ± 0.03 mm, respectively. The between-system RMSE was 1.6 ± 0.34 mm for the study group and 1.38 mm for the mannequin head. The two-factor ANOVA indicated that variability attributable to the system was expressed mainly at the upper eyelids, nasal tip and alae, and chin. Conclusions: The variability values of the custom-built setup presented here were competitive with a state-of-the-art commercial system at a more affordable level of investment.
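The within-system RMSE described in the Methods can be illustrated with a short sketch: given a subject's mapped scans and the ground-truth mapping (the average configuration after non-scaled superimposition), pool the squared per-vertex distances across scans and take the root mean. This is a minimal illustration assuming the scans are already superimposed; the function name and array shapes are assumptions, not the authors' code.

```python
import numpy as np

def rmse_to_ground_truth(scans, ground_truth):
    """scans: (k, n, 3) superimposed mapped meshes for one subject;
    ground_truth: (n, 3) average mapping for that subject.
    Returns the RMSE of per-vertex distances, pooled over all
    k scans (no scaling, mirroring the non-scaled superimposition)."""
    sq_dist = ((scans - ground_truth) ** 2).sum(axis=-1)  # (k, n)
    return float(np.sqrt(sq_dist.mean()))
```

Averaging this quantity over subjects would yield a within-system figure comparable to the 0.45–0.52 mm values reported above; the same comparison between the two systems' ground-truth mappings gives the between-system figure.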