The current technological revolution driven by advances in machine learning has motivated a wide range of applications aimed at improving our quality of life. Representative of such applications are autonomous and semi-autonomous Powered Wheelchairs (PWs), which provide the wheelchair user with a degree of autonomy in terms of guidance and interaction with the environment. From this perspective, the present research focuses on the design of lightweight systems that deliver the accuracy required by the navigation system while enabling an embedded implementation. This motivated us to develop a real-time measurement methodology that relies on a monocular RGB camera to detect the caregiver's feet with a deep learning method and then measures the caregiver's distance from the PW. An important contribution of this article is the metrological characterization of the proposed methodology against measurements made with dedicated depth cameras. Our results show that, despite shifting from 3D to 2D imaging, distance-estimation performance remains comparable to that of Light Detection and Ranging (LiDAR) and even improves on that of stereo cameras. In particular, we obtained instrument classes comparable to those of LiDAR and stereo cameras, with measurement uncertainties on the order of 10 cm. This is complemented by a significant reduction in data volume and object-detection complexity, which facilitates deployment: initial calibration and positioning are less demanding than for three-dimensional segmentation algorithms.
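To make the monocular setting concrete, the following is a minimal sketch of one common way to estimate distance from a single RGB frame: the pinhole camera model, which relates the pixel size of a detected object of known physical size to its distance. The function name, parameter values, and the use of the detected foot's bounding-box height are illustrative assumptions, not the article's specific method.

```python
# Hypothetical sketch (pinhole camera model), not the article's exact pipeline:
# an object of known physical size H appears with pixel height h inversely
# proportional to its distance Z from the camera:  Z = f * H / h,
# where f is the focal length expressed in pixels.

def estimate_distance_m(focal_px: float,
                        real_height_m: float,
                        bbox_height_px: float) -> float:
    """Estimate object distance (meters) from a monocular detection.

    focal_px       -- camera focal length in pixels (from calibration)
    real_height_m  -- assumed physical size of the detected object (m)
    bbox_height_px -- height of the detector's bounding box in pixels
    """
    if bbox_height_px <= 0:
        raise ValueError("bounding-box height must be positive")
    return focal_px * real_height_m / bbox_height_px

# Illustrative numbers: 800 px focal length, 0.25 m foot length,
# 100 px bounding box -> 2.0 m estimated distance.
print(estimate_distance_m(800.0, 0.25, 100.0))
```

In practice the focal length would come from a one-off intrinsic calibration of the RGB camera, which is precisely the lightweight setup step the abstract contrasts with the heavier calibration of 3D sensors.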