Background: The iPhone X (Apple, Inc., Cupertino, Calif.) is the first smartphone to be released with a high-fidelity three-dimensional scanner. At present, half of all U.S. smartphone users use an iPhone, and recent data suggest that the majority of these 230 million individuals will upgrade to the iPhone X within 2 years. This represents a profound expansion in access to three-dimensional scanning technology, not only for plastic surgeons but for their patients as well. The purpose of this study was to compare the iPhone X scanner against a popular, portable three-dimensional camera used in plastic surgery (Canfield Vectra H1; Canfield Scientific, Inc., Parsippany, N.J.).
Methods: Sixteen human subjects underwent three-dimensional facial capture with the iPhone X and the Canfield Vectra H1. Results were compared using color map analysis and surface distances between key anatomical landmarks. To assess the repeatability and precision of the iPhone X three-dimensional scanner, six facial scans of a single participant were obtained and compared using color map analysis. In addition, three-dimensionally printed facial masks (n = 3) were captured with each device and compared.
Results: For the subject comparisons, the average root mean square difference was 0.44 mm for color map analysis and 0.46 mm for surface distances between anatomical landmarks. For repeatability and precision testing, the average root mean square difference following color map analysis was 0.35 mm. For the three-dimensionally printed facial mask comparison, the average root mean square difference was 0.28 mm.
Conclusions: The iPhone X offers three-dimensional scanning that is accurate and precise to within 0.5 mm when compared with a commonly used, validated, and expensive three-dimensional camera. This represents a significant reduction in the barrier to accessing three-dimensional scanning technology for both patients and surgeons.
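The comparisons above come down to root-mean-square (RMS) distances between registered surfaces. The snippet below is a minimal, hypothetical sketch of that kind of computation, not the authors' code: it assumes each scan has already been rigidly aligned into a common frame and exported as an Nx3 vertex array in millimeters, and the file names are placeholders.

```python
# Minimal sketch (not the study's actual pipeline): RMS surface-to-surface
# difference between two pre-aligned facial scans, the quantity underlying
# the color map comparisons described above.
import numpy as np
from scipy.spatial import cKDTree

def rms_surface_difference(reference_pts: np.ndarray, test_pts: np.ndarray) -> float:
    """RMS of nearest-neighbor distances from each test point to the reference surface."""
    tree = cKDTree(reference_pts)          # spatial index over the reference scan's vertices
    distances, _ = tree.query(test_pts)    # closest reference point for every test point
    return float(np.sqrt(np.mean(distances ** 2)))

if __name__ == "__main__":
    # Hypothetical inputs: Nx3 vertex arrays from each device after rigid
    # registration (e.g., ICP) into a common coordinate frame, in millimeters.
    vectra_scan = np.loadtxt("vectra_face.xyz")    # assumed file name
    iphone_scan = np.loadtxt("iphonex_face.xyz")   # assumed file name
    print(f"RMS difference: {rms_surface_difference(vectra_scan, iphone_scan):.2f} mm")
```

A color map visualization would color each test point by its individual nearest-neighbor distance rather than collapsing them into a single RMS value.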
Digital twins of real environments are valuable tools for generating realistic synthetic data and performing simulations with artificial intelligence and machine learning models. Creating digital twins of urban, on-road environments has been studied extensively in light of rising momentum in urban planning and autonomous vehicle systems, yet creating digital twins of rugged, off-road environments such as forests, farms, and mountainous areas remains poorly studied. In this work, we propose a pipeline to produce digital twins of off-road environments with a focus on modeling vegetation and uneven terrain. A point cloud map of the off-road environment is first reconstructed from LiDAR scans using scan registration algorithms. Terrain segmentation, vegetation segmentation, and Euclidean clustering are then applied to separate the point cloud into individual entities within the digital twin model. Experimental validation is carried out using LiDAR scans collected from an off-road proving ground at the Center for Advanced Vehicular Systems (CAVS) at Mississippi State University. A prototype system is demonstrated with the Mississippi State University Autonomous Vehicle Simulator (MAVS), and the source code and data are publicly available*. The proposed framework has a wide range of applications, including virtual autonomous vehicle testing, synthetic data generation, and training of AI models.
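As a rough illustration of how the pipeline stages above chain together (scan registration, ground separation, and Euclidean clustering), the sketch below uses the open-source Open3D library rather than the authors' MAVS tooling; the file names, thresholds, and single-plane ground fit are simplifying assumptions, not the paper's actual parameters.

```python
# Illustrative sketch of a scan-registration + segmentation + clustering chain
# using Open3D; this is an assumed stand-in for the paper's pipeline, not its code.
import numpy as np
import open3d as o3d

def register_scans(source, target, voxel=0.2):
    """Pairwise ICP registration of two LiDAR scans into a common frame."""
    src = source.voxel_down_sample(voxel)
    tgt = target.voxel_down_sample(voxel)
    result = o3d.pipelines.registration.registration_icp(src, tgt, 1.0)
    return result.transformation

def split_terrain_and_objects(cloud):
    """Rough terrain/object split: RANSAC plane fit for ground, DBSCAN for object clusters."""
    _, ground_idx = cloud.segment_plane(distance_threshold=0.3, ransac_n=3, num_iterations=1000)
    ground = cloud.select_by_index(ground_idx)
    objects = cloud.select_by_index(ground_idx, invert=True)
    labels = np.array(objects.cluster_dbscan(eps=0.8, min_points=20))  # Euclidean-style clustering
    clusters = [objects.select_by_index(np.where(labels == k)[0].tolist())
                for k in range(labels.max() + 1)]
    return ground, clusters

if __name__ == "__main__":
    scan_a = o3d.io.read_point_cloud("scan_000.pcd")   # assumed LiDAR scan files
    scan_b = o3d.io.read_point_cloud("scan_001.pcd")
    scan_a.transform(register_scans(scan_a, scan_b))    # bring scan_a into scan_b's frame
    combined = scan_a + scan_b                           # merged point cloud map
    ground, clusters = split_terrain_and_objects(combined)
    print(f"terrain points: {len(ground.points)}, object clusters: {len(clusters)}")
```

Because the paper targets uneven off-road terrain, a single plane fit is only a placeholder here; a dedicated terrain-segmentation step would replace it, with the clustered objects then treated as candidate vegetation entities in the digital twin.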