Virtual Reality (VR) content consists of seamless 360°×180° panoramic video stitched from multiple overlapping video streams. A recent increase in demand for VR content has led to a supply of commercially available solutions that, though inexpensive, lack scalability and quality. In this paper, we propose an end-to-end VR system for stitching full spherical content. The system is composed of a camera rig calibration module and a stitching module. The calibration module performs geometric alignment of the camera rig. The stitching module transforms textures from camera or video streams into a VR stream using lookup tables (LUTs) and blend masks (BMs). Our main contribution in this work is improved stitching quality. First, we propose a feature preprocessing method that filters out inconsistent, error-prone features. Second, we propose a geometric alignment method that outperforms state-of-the-art VR stitching solutions. We tested our system on diverse image sets and obtained state-of-the-art geometric alignment. Moreover, we achieved real-time stitching of camera and video streams at up to 120 fps at 4K resolution. After stitching, we encode the VR content for IP multicasting.
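To illustrate the LUT-and-blend-mask pipeline the abstract describes, the following is a minimal NumPy sketch, not the authors' implementation: each output (panorama) pixel looks up its source coordinates in a precomputed LUT, and the warped camera images are combined with per-pixel blend masks. The toy LUTs, mask layout, and nearest-neighbour sampling here are illustrative assumptions.

```python
import numpy as np

def apply_lut(src, lut_x, lut_y):
    # Remap a camera image into panorama space via per-pixel lookup
    # tables; lut_x/lut_y hold, for every output pixel, the integer
    # source coordinates to sample (nearest-neighbour for brevity).
    return src[lut_y, lut_x]

def blend(warped_images, blend_masks):
    # Weighted sum of warped camera images; the masks are assumed to
    # sum to 1.0 at every output pixel.
    out = np.zeros_like(warped_images[0], dtype=np.float64)
    for img, mask in zip(warped_images, blend_masks):
        out += img.astype(np.float64) * mask[..., None]
    return out.astype(np.uint8)

# Toy example: two 4x4 "camera" frames mapped onto a 4x8 panorama.
h, w = 4, 4
cam_a = np.full((h, w, 3), 200, np.uint8)
cam_b = np.full((h, w, 3), 100, np.uint8)

pan_h, pan_w = 4, 8
# Hypothetical LUTs: every panorama pixel samples source column x % w.
lut_x = np.tile(np.arange(pan_w) % w, (pan_h, 1))
lut_y = np.tile(np.arange(pan_h)[:, None], (1, pan_w))

warped_a = apply_lut(cam_a, lut_x, lut_y)
warped_b = apply_lut(cam_b, lut_x, lut_y)

# Hard (binary) masks: left half from cam_a, right half from cam_b.
# A real stitcher would feather the masks across the overlap region.
mask_a = np.zeros((pan_h, pan_w))
mask_a[:, :w] = 1.0
mask_b = 1.0 - mask_a

pano = blend([warped_a, warped_b], [mask_a, mask_b])
```

Because the LUTs and masks are computed once during calibration, the per-frame work at stitch time reduces to the remap and blend above, which is what makes real-time throughput feasible.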