Autonomous mobile vehicles need advanced systems to determine their exact position in a given coordinate system. GPS and vision systems are most often used for this purpose. Both have disadvantages: the GPS signal is unavailable indoors and may be inaccurate, while a vision system depends strongly on lighting conditions. This paper assumes that the primary system for determining the vehicle's position is wheel odometry combined with an IMU (Inertial Measurement Unit) sensor, whose task is to measure changes in the robot's orientation, such as the yaw rate. However, relying only on the wheel-based measurements introduces an accumulating error, caused most often by wheel slippage and IMU sensor drift. In the presented work, this error is reduced by a vision system that continuously measures the vehicle's distances to markers placed in its surroundings. Additionally, the paper describes the fusion of the vision-system signals with the wheel odometry. Studies of the vehicle's positioning accuracy with the vision system turned on and off are presented: in laboratory tests in which the vehicle wheels experienced no slippage, the average positioning error was reduced from 0.32 m to 0.13 m. The paper also describes the performance of the system on a real driven track, where the assumption was not to use GPS geolocation. In this case, the vision system assisted the vehicle positioning, and an accuracy of 0.2 m was achieved at the control points.
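
The abstract does not give implementation details of the fusion. As a rough illustration of the idea it describes, the sketch below blends a dead-reckoned odometry/IMU pose with range measurements to markers at known positions, using a fixed-gain correction as a simplified stand-in for whatever filter the authors actually use. All names, the marker map, and the gain value are hypothetical.

```python
import math

# Hypothetical marker map: marker id -> known (x, y) position in the world frame.
MARKERS = {0: (5.0, 0.0), 1: (5.0, 5.0)}

class FusedLocalizer:
    """Sketch of odometry/IMU dead reckoning corrected by marker ranges."""

    def __init__(self, x=0.0, y=0.0, yaw=0.0, gain=0.3):
        self.x, self.y, self.yaw = x, y, yaw
        self.gain = gain  # assumed blending weight for vision corrections

    def predict(self, ds, yaw_rate, dt):
        """Dead-reckoning step: ds from wheel odometry, yaw_rate from the IMU."""
        self.yaw += yaw_rate * dt
        self.x += ds * math.cos(self.yaw)
        self.y += ds * math.sin(self.yaw)

    def correct(self, marker_id, measured_range):
        """Pull the estimate toward the range measured to a known marker."""
        mx, my = MARKERS[marker_id]
        dx, dy = self.x - mx, self.y - my
        predicted_range = math.hypot(dx, dy)
        if predicted_range < 1e-6:
            return
        # Move along the marker-to-robot direction by a fraction of the residual,
        # so repeated corrections bound the drift accumulated by dead reckoning.
        residual = measured_range - predicted_range
        self.x += self.gain * residual * dx / predicted_range
        self.y += self.gain * residual * dy / predicted_range

loc = FusedLocalizer()
loc.predict(ds=0.1, yaw_rate=0.02, dt=0.1)    # wheel + IMU prediction
loc.correct(marker_id=0, measured_range=4.9)  # vision-based range correction
```

In this sketch the prediction step alone would drift without bound under slippage or gyro bias, which mirrors the problem the abstract describes; each marker observation pulls the estimate back toward a world-anchored measurement, which is why enabling the vision corrections reduces the accumulated error.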