In this paper, we give an overview of how the path from vision to motion has been developed in the TechUnited team. The vision module includes: (i) color calibration using a union of convex hulls to select a region in the 3D color space, (ii) automatic calibration of the mapping from the camera image to the field via a genetic algorithm, and (iii) self-localization based on field lines. The output of the vision module is used by the motion module, which includes: (i) fusion of vision and encoder data by monitoring the drift in the odometry estimate, (ii) generation of a motion path that complies with the robot's limitations to prevent wheel slippage, and (iii) collocated motion control. In contrast to closing the control loop on vision, our approach uses the wheel encoders as the basis for motion control, which has several advantages, such as lower delay due to a higher sampling frequency. Vision is used only to compensate for the slow drift caused by slip in the wheel-surface contact.
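To illustrate the last point, the sketch below shows one possible way to combine encoder-based odometry with low-rate vision fixes: encoder displacements are integrated at the control rate, while each vision-based self-localization result only nudges a slowly varying correction offset. The names (DriftCompensatedOdometry, correction_gain) and the complementary-filter-style update are illustrative assumptions made here, not the team's actual implementation.

```python
import math
from dataclasses import dataclass


@dataclass
class Pose:
    x: float = 0.0
    y: float = 0.0
    phi: float = 0.0  # heading [rad]


def wrap_angle(a: float) -> float:
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))


class DriftCompensatedOdometry:
    """Encoder-based pose estimate with a slowly adapted vision correction.

    Hypothetical sketch: high-rate odometry updates integrate encoder
    displacements, while low-rate vision fixes adjust a separate drift
    offset, so motion control can run on the encoder estimate alone.
    """

    def __init__(self, correction_gain: float = 0.05):
        # Fraction of the vision/odometry discrepancy absorbed per vision fix
        # (a small gain means only slow drift is compensated).
        self.gain = correction_gain
        self.odom = Pose()    # raw encoder-integrated pose
        self.offset = Pose()  # slowly varying drift correction

    def update_odometry(self, dx: float, dy: float, dphi: float) -> None:
        """Integrate a displacement measured by the wheel encoders (robot frame)."""
        c, s = math.cos(self.odom.phi), math.sin(self.odom.phi)
        self.odom.x += c * dx - s * dy
        self.odom.y += s * dx + c * dy
        self.odom.phi = wrap_angle(self.odom.phi + dphi)

    def update_vision(self, vision_pose: Pose) -> None:
        """Nudge the drift correction towards the vision-based self-localization."""
        est = self.pose()
        self.offset.x += self.gain * (vision_pose.x - est.x)
        self.offset.y += self.gain * (vision_pose.y - est.y)
        self.offset.phi += self.gain * wrap_angle(vision_pose.phi - est.phi)

    def pose(self) -> Pose:
        """Current best estimate: encoder pose plus the drift correction."""
        return Pose(self.odom.x + self.offset.x,
                    self.odom.y + self.offset.y,
                    wrap_angle(self.odom.phi + self.offset.phi))
```

With a small correction_gain, the pose used by the controller is dominated by the high-rate encoder integration, and only drift on a time scale of many vision samples is removed, which mirrors the role vision plays in the approach described above.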