This paper presents a keypoint detection method based on the Laplacian of Gaussian (LoG). In contrast to the Difference of Gaussian (DoG)-based keypoint detection used in the Scale Invariant Feature Transform (SIFT), we focus on the LoG operator and its higher-order derivatives. We provide mathematical analogies between the higher-order DoG (HDoG) and the higher-order LoG (HLoG), together with experimental results demonstrating the effectiveness of the proposed HLoG-based keypoint detection method. The performance of HLoG is evaluated with four tests: i) repeatability of the keypoints detected across images under various transformations, ii) image retrieval, iii) panorama stitching, and iv) 3D reconstruction. The proposed HLoG method performs comparably to HDoG, and the combination of HLoG and HDoG yields significant improvements in a range of keypoint-related computer vision problems.
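For readers unfamiliar with the LoG/DoG analogy referenced above, the standard first-order relation from the SIFT literature is sketched below. The notation here is ours and is meant only as background; it is not the paper's higher-order derivation.

```latex
% Heat-diffusion relation between the Gaussian kernel G(x, y, \sigma)
% and its Laplacian:
%   \partial G / \partial \sigma = \sigma \nabla^{2} G.
% A finite-difference approximation in \sigma then links DoG to LoG:
\[
  G(x, y, k\sigma) - G(x, y, \sigma)
  \;\approx\;
  (k - 1)\,\sigma^{2}\,\nabla^{2} G(x, y, \sigma),
\]
% i.e., the DoG response approximates the scale-normalized LoG
% \sigma^{2}\nabla^{2} G up to the constant factor (k - 1).
```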
Virtual Reality (VR) content comprises 360°×180° seamless panoramic video stitched from multiple overlapping video streams. A recent increase in demand for VR content has led to commercially available solutions that, although inexpensive, lack scalability and quality. In this paper, we propose an end-to-end VR system for stitching full spherical content. The system consists of a camera-rig calibration module and a stitching module. The calibration module performs geometric alignment of the camera rig. The stitching module transforms textures from the camera or video streams into a VR stream using lookup tables (LUTs) and blend masks (BMs). In this work, our main contribution is improved stitching quality. First, we propose a feature preprocessing method that filters out inconsistent, error-prone features. Second, we propose a geometric alignment method that outperforms state-of-the-art VR stitching solutions. We tested our system on diverse image sets and obtained state-of-the-art geometric alignment. Moreover, we achieved real-time stitching of camera and video streams at up to 120 fps at 4K resolution. After stitching, the VR content is encoded for IP multicasting.
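As a rough illustration of how precomputed LUTs and blend masks can be used at run time, here is a minimal compositing sketch. The function name, the LUT layout (per-panorama-pixel source coordinates), and the use of OpenCV's remap are assumptions made for this sketch, not the paper's actual implementation.

```python
# Minimal sketch: warp each camera frame with a precomputed LUT, then
# composite with per-camera blend masks. Hypothetical layout: each LUT is a
# (map_x, map_y) pair giving, for every panorama pixel, the source pixel to
# sample; each mask is a float32 weight image (weights sum to 1 in overlaps).
import numpy as np
import cv2

def stitch_frame(frames, luts, masks, out_size):
    pano_h, pano_w = out_size
    pano = np.zeros((pano_h, pano_w, 3), np.float32)
    for frame, (map_x, map_y), mask in zip(frames, luts, masks):
        # Per-pixel lookup replaces any on-the-fly projection math.
        warped = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR,
                           borderMode=cv2.BORDER_CONSTANT)
        # Accumulate the weighted contribution of this camera.
        pano += warped.astype(np.float32) * mask[..., None]
    return np.clip(pano, 0, 255).astype(np.uint8)
```

The point of the LUT/blend-mask split is that all calibration and projection work is baked into the tables offline, so per-frame stitching reduces to a lookup and a weighted sum, which is what makes real-time rates plausible.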