Mixed precision techniques have been successfully applied to improve the performance and energy efficiency of computation in embedded and high-performance systems. However, few solutions address precision tuning of both GPGPU kernel code and the corresponding CPU host code, which limits the gains achievable through mixed precision. We propose an extension to the TAFFO precision tuning toolset that enables mixed precision across floating- and fixed-point data types on GPGPUs, leveraging static analysis and providing seamless interface adaptation between host and GPGPU kernel code. The proposed tool achieves speedups exceeding 2× by optimizing both kernel and host code.
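
To make the kind of transformation concrete, the sketch below shows a saxpy-style kernel rewritten by hand from float to Q16.16 fixed point, with the host side adapting the interface by converting values at the kernel boundary. This is a minimal illustration under assumed conventions; the `fixed_t` type, the conversion helpers, and the kernel itself are hypothetical and do not represent TAFFO's actual output or API.

```cuda
// Illustrative sketch (not TAFFO output): y = a*x + y computed in
// Q16.16 fixed point on the GPU, with host-side interface adaptation.
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

typedef int32_t fixed_t;          // Q16.16 fixed-point word (assumed format)
#define FRAC_BITS 16

__host__ __device__ inline fixed_t to_fixed(float x) {
    return (fixed_t)(x * (1 << FRAC_BITS));
}
__host__ __device__ inline float to_float(fixed_t x) {
    return (float)x / (1 << FRAC_BITS);
}

// Kernel operates entirely in fixed point; the product widens to
// 64 bits so the intermediate value keeps full precision.
__global__ void saxpy_fixed(int n, fixed_t a, const fixed_t *x, fixed_t *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] += (fixed_t)(((int64_t)a * x[i]) >> FRAC_BITS);
}

int main() {
    const int n = 1 << 10;
    fixed_t *hx = new fixed_t[n], *hy = new fixed_t[n];
    // Host-side interface adaptation: floats cross the kernel
    // boundary as fixed-point words.
    for (int i = 0; i < n; ++i) { hx[i] = to_fixed(1.5f); hy[i] = to_fixed(2.0f); }

    fixed_t *dx, *dy;
    cudaMalloc(&dx, n * sizeof(fixed_t));
    cudaMalloc(&dy, n * sizeof(fixed_t));
    cudaMemcpy(dx, hx, n * sizeof(fixed_t), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, n * sizeof(fixed_t), cudaMemcpyHostToDevice);

    saxpy_fixed<<<(n + 255) / 256, 256>>>(n, to_fixed(0.5f), dx, dy);
    cudaMemcpy(hy, dy, n * sizeof(fixed_t), cudaMemcpyDeviceToHost);

    // Convert back to float only at the host boundary.
    printf("y[0] = %f (expected 2.75)\n", to_float(hy[0]));  // 2.0 + 0.5*1.5
    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}
```

Keeping the conversions at the host/kernel boundary, rather than inside the kernel, is what lets the device code run purely on integer units, which is the source of the speedup an automated tool would aim to exploit.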