In this paper, the authors describe a real-time CPU-based implementation of a virtual view synthesis algorithm for high-resolution omnidirectional content. The proposed method allows a user of an immersive video system to navigate virtually within a scene captured by a multiview system composed of 360-degree or 180-degree cameras. Unlike other state-of-the-art real-time synthesis methods, the proposed method does not require powerful graphics cards. Instead, it takes advantage of the emergence of consumer-grade multithreaded CPUs: CPU-based virtual view synthesis allows the further development of inexpensive, consumer-oriented immersive video systems. The proposed method was compared with the state-of-the-art view synthesis algorithm RVS, both in terms of the quality of synthesized views and the computational time required for synthesis, demonstrating the usefulness of the proposed method.
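As an illustration only, and not the authors' actual implementation, the sketch below shows how depth-based reprojection of an equirectangular (360-degree) view can be parallelized across CPU threads with OpenMP; the view layout, the translation-only camera motion, and all names are assumptions made for the example.

```cpp
// Sketch: CPU-parallel reprojection of an equirectangular view with
// per-pixel depth into a virtual view translated by (tx, ty, tz).
// Assumed for illustration; not the paper's algorithm.
#include <cmath>
#include <cstdint>
#include <limits>
#include <vector>

struct View {
    int width = 0, height = 0;
    std::vector<uint8_t> color;  // grayscale samples, width * height
    std::vector<float> depth;    // depth in meters, width * height
};

View synthesizeVirtualView(const View& in, float tx, float ty, float tz) {
    View out;
    out.width = in.width;
    out.height = in.height;
    out.color.assign(size_t(out.width) * out.height, 0);
    out.depth.assign(size_t(out.width) * out.height,
                     std::numeric_limits<float>::max());

    const float PI = 3.14159265358979f;

    // Input rows are independent, so they distribute well across threads.
    #pragma omp parallel for schedule(dynamic)
    for (int v = 0; v < in.height; ++v) {
        for (int u = 0; u < in.width; ++u) {
            const size_t idx = size_t(v) * in.width + u;
            const float d = in.depth[idx];
            if (d <= 0.0f) continue;  // no valid depth for this pixel

            // Equirectangular pixel -> spherical angles -> 3D point.
            const float lon = (float(u) / in.width * 2.0f - 1.0f) * PI;
            const float lat = (0.5f - float(v) / in.height) * PI;
            float x = d * std::cos(lat) * std::sin(lon);
            float y = d * std::sin(lat);
            float z = d * std::cos(lat) * std::cos(lon);

            // Translate into the virtual camera's coordinate system.
            x -= tx; y -= ty; z -= tz;

            // 3D point -> spherical angles -> target pixel.
            const float r = std::sqrt(x * x + y * y + z * z);
            if (r <= 0.0f) continue;
            const float lon2 = std::atan2(x, z);
            const float lat2 = std::asin(y / r);
            const int u2 = int((lon2 / PI + 1.0f) * 0.5f * out.width);
            const int v2 = int((0.5f - lat2 / PI) * out.height);
            if (u2 < 0 || u2 >= out.width || v2 < 0 || v2 >= out.height)
                continue;

            // Keep the closest point per target pixel (simple z-buffer).
            // The critical section keeps the sketch race-free; a tuned
            // implementation would merge per-thread buffers instead.
            const size_t oidx = size_t(v2) * out.width + u2;
            #pragma omp critical
            {
                if (r < out.depth[oidx]) {
                    out.depth[oidx] = r;
                    out.color[oidx] = in.color[idx];
                }
            }
        }
    }
    return out;
}
```

The sketch omits the steps a practical synthesizer needs, such as inpainting of disocclusions and blending of multiple input views; it only illustrates why the workload suits multithreaded CPUs, as the per-pixel reprojections are independent apart from the final z-buffer merge.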