Virtual Reality and Free Viewpoint navigation require high-quality rendered images to be realistic. Current hardware-assisted ray-tracing methods cannot reach the expected quality in real time and are also limited by the quality of the 3D mesh. An alternative is depth image-based rendering (DIBR), where the input consists only of images and their associated depth maps, from which virtual views are synthesized for the Head-Mounted Display (HMD). The MPEG Immersive Video (MIV) standard uses such a DIBR algorithm, called the Reference View Synthesizer (RVS). We first implemented a GPU version, the Real-time Accelerated View Synthesizer (RaViS), which synthesizes two virtual views in real time for the HMD. In the present paper, we explore the differences between desktop and embedded GPU platforms, porting RaViS to an embedded HMD without the need for a separate, discrete desktop GPU. The proposed solution gives a first insight into DIBR view synthesis techniques on embedded HMDs using OpenGL and Vulkan, two cross-platform 3D graphics APIs with support for embedded devices.