Image processing and vision-based navigation algorithms require images for design, testing, and validation. For space exploration applications, retrieving realistic images is difficult, if not impossible. To mitigate this, two approaches can be used: high-fidelity rendering of celestial bodies or hardware-in-the-loop testing. In this work, we focus on the latter by elaborating on the design, implementation, validation, and calibration of a vision-based navigation test bench called TinyV3RSE. The design of such a facility has been a collaborative effort at the Deep-space Astrodynamics Research & Technology (DART) group, which will benefit from its usage in the various projects and missions in which it is involved. In this work, we present for the first time the facility design, the current calibration procedure, and preliminary results. These focus on the image processing of a small-body mission and on the performance of a traditional, well-known optical navigation algorithm about the Moon.