Mobile Augmented Reality (MAR) is designed to keep pace with high-end mobile devices and their powerful sensors. This evolution excludes users with low-end devices and network constraints. This article presents ModAR, a hybrid Android prototype that extends the MAR experience to this underserved group. It combines feature-based image matching and pose estimation with fast rendering of textured 3D models. Planar objects in the real environment serve as pattern images for overlaying the users' own meshes or the app's default ones. Since ModAR is built on the OpenCV C++ library via the Android NDK and the OpenGL ES 2.0 graphics API, it has no dependencies on additional software, a particular operating-system version, or model-specific hardware. The developed 3D graphics engine implements optimized vertex-data rendering that combines data grouping, synchronization, sub-texture compression and instancing for limited CPU/GPU resources and a single-threaded approach. It achieves up to a 3× speed-up over standard indexed rendering, and AR overlay of a 50 K-vertex 3D model in less than 30 s. Several deployment scenarios for pose estimation demonstrate that the oriented FAST detector with an upper threshold on features per frame, combined with the ORB descriptor, yields the best results in terms of robustness and efficiency: it reduces image-matching time by 90% compared to the AGAST detector with the BRISK descriptor, while maintaining pattern recognition accuracy above 90% over a wide range of scale changes, regardless of in-plane rotations and partial occlusions of the pattern.
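As a brief note on the planar pose-estimation step described above (the notation below is ours, not taken from the article): once a homography H = [h_1 h_2 h_3] between the pattern image and the camera frame has been estimated from the matched ORB features, and the camera intrinsics K are known, the camera pose follows from the standard decomposition

```latex
% Planar pose from a pattern-to-frame homography H = [h_1  h_2  h_3]
% and known intrinsics K (up to scale, H \simeq K [r_1  r_2  t]):
\lambda = \frac{1}{\lVert K^{-1} h_1 \rVert}, \qquad
r_1 = \lambda K^{-1} h_1, \qquad
r_2 = \lambda K^{-1} h_2, \qquad
r_3 = r_1 \times r_2, \qquad
t = \lambda K^{-1} h_3
```

where R = [r_1 r_2 r_3] is then re-orthogonalized (e.g., via an SVD projection) to obtain a valid rotation matrix; equivalently, OpenCV's cv::solvePnP can be applied directly to the matched planar correspondences.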
In the context of web augmented reality (AR), 3D rendering that maintains visual quality while meeting frame-rate requirements remains a challenge. The lack of a dedicated, efficient 3D format often degrades the visual quality of the original data and compromises the user experience. This paper examines the integration of web-streamable, view-dependent representations of large, high-resolution 3D models into web AR applications. The developed cross-platform prototype exploits the batched multi-resolution structures of the Nexus.js library as a dedicated lightweight web AR format and tests it against common formats and compression techniques. Built with the open-source AR.js and Three.js libraries, it overlays the multi-resolution models while interactively adjusting their position, rotation and scale parameters. The proposed method includes real-time view-dependent rendering, geometric instancing and 3D pose regression for two types of AR: natural feature tracking (NFT) and location-based positioning for large, textured 3D overlays. The prototype achieves up to a 46% speedup in rendering time compared to optimized glTF models, while a 34 M-vertex 3D model becomes visible in less than 4 s without degraded visual quality over slow 3G networks. The evaluation across various scenes and devices offers insights into how a multi-resolution scheme can be adopted in web AR for high-quality visualization and real-time performance.
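To make the overlay-adjustment idea concrete, the following is a minimal sketch (not the article's code) of placing a streamed model on an AR anchor with Three.js and exposing its position, rotation and scale for interactive adjustment. The Nexus.js multi-resolution loader is replaced here by the standard GLTFLoader as a stand-in, the model URL and transform values are placeholders, and the anchor's pose is assumed to be written each frame by the AR tracking layer (NFT or location-based).

```typescript
// Sketch only: a streamed 3D overlay attached to an AR anchor in Three.js.
// The multi-resolution (Nexus.js) loader is swapped for GLTFLoader for illustration.
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  60, window.innerWidth / window.innerHeight, 0.01, 1000);
scene.add(new THREE.AmbientLight(0xffffff, 1.0));

// Anchor group: the AR tracker is assumed to update this node's pose each frame.
const anchor = new THREE.Group();
scene.add(anchor);

// Load the overlay model (placeholder URL) and attach it to the anchor.
new GLTFLoader().load('model.glb', (gltf) => {
  const overlay = gltf.scene;
  overlay.position.set(0, 0, 0);            // user-adjustable offset
  overlay.rotation.set(-Math.PI / 2, 0, 0); // user-adjustable rotation
  overlay.scale.setScalar(0.1);             // user-adjustable scale
  anchor.add(overlay);
});

// Render loop; a view-dependent scheme would refine the streamed geometry
// here, based on the current camera, before each draw.
renderer.setAnimationLoop(() => {
  renderer.render(scene, camera);
});
```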