In this paper, a multi-frame homography estimation method is proposed for video stitching in static camera environments. A homography robust to spatio-temporally induced noise is estimated per interval, using feature points extracted during a predetermined time interval. In each quantized location bin, the feature point with the largest blob response, the representative feature point, is used for matching a pair of video sequences. After matching representative feature points from each camera, the homography for the interval is estimated by random sample consensus (RANSAC) on the matched representative feature points, with each point's chance of being sampled proportional to its number of occurrences in the interval. The performance of the proposed method is compared with that of the per-frame method by investigating alignment distortion and stitching scores for daytime and noisy video sequence pairs. The proposed method is shown to reduce alignment distortion in overlapping regions and to improve the stitching score. It can be used for panoramic video stitching with static video cameras and for panoramic image stitching with less alignment distortion.
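The occurrence-weighted RANSAC sampling described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: it substitutes a simple 2D translation model for the full homography so the example stays self-contained, and the function and variable names are hypothetical.

```python
import random

def weighted_ransac_translation(matches, weights, iters=200, tol=2.0, seed=0):
    """RANSAC with occurrence-weighted sampling (translation model).

    matches: list of ((x1, y1), (x2, y2)) matched point pairs
    weights: one count per match, e.g. the number of frames in the
             interval in which the representative feature point occurred
    Returns the best (dx, dy) model and its inlier count.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        # Weighted sampling: frequently observed matches are proposed
        # as model hypotheses proportionally more often.
        (p, q) = rng.choices(matches, weights=weights, k=1)[0]
        dx, dy = q[0] - p[0], q[1] - p[1]
        # Count matches consistent with the hypothesized translation.
        inliers = sum(
            1 for (a, b) in matches
            if abs(a[0] + dx - b[0]) <= tol and abs(a[1] + dy - b[1]) <= tol
        )
        if inliers > best_inliers:
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers
```

A full implementation would fit a 3x3 homography from four sampled correspondences (e.g. via direct linear transform) instead of a translation from one, but the weighted hypothesis sampling is the same.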
In this paper, we propose a semantic segmentation-based static video stitching method that reduces parallax and misalignment distortion for sports stadium scenes with dynamic foreground objects. First, the video frame pairs to be stitched are divided into segments of different classes through semantic segmentation. Region-based stitching is performed on matched segment pairs, under the assumption that segments of the same semantic class lie on the same plane. Second, to prevent degradation of stitching quality for plain or noisy videos, the homography for each matched segment pair is estimated using temporally consistent feature points. Finally, the stitched video frame is synthesized by stacking the stitched matched segment pairs and the foreground segments onto the reference frame plane in descending order of area. The performance of the proposed method is evaluated by comparing the subjective quality, geometric distortion, and pixel distortion of video sequences stitched by the proposed and conventional methods. The proposed method is shown to reduce parallax and misalignment distortion in segments with plain texture or large parallax, and to significantly reduce geometric and pixel distortion compared with conventional methods.
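The final synthesis step, stacking segments in descending order of area so that smaller (typically foreground) segments are painted on top, can be sketched as follows. This is a minimal illustration of the layering rule only; the segment representation and names are assumptions.

```python
def back_to_front_order(segments):
    """Return segments sorted from largest to smallest area.

    Painting in this order places large background segments (e.g. the
    field) first and smaller foreground segments (e.g. players) last,
    so foreground objects end up on top in the composited frame.
    segments: list of (name, area_in_pixels) tuples.
    """
    return sorted(segments, key=lambda s: s[1], reverse=True)
```

For example, `back_to_front_order([("player", 900), ("field", 50000), ("stands", 20000)])` paints the field first, then the stands, then the player.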
The present study investigates natural convection around a hot circular cylinder embedded in a cold square enclosure. Numerical simulations are performed to solve the two-dimensional steady natural convection problem for three Rayleigh numbers, Ra = 10³, 10⁴, and 10⁵, at a fixed Prandtl number of 0.7. A wide range of inner-cylinder positions is considered to identify the effect of cylinder eccentricity on the flow and thermal structures. The flow structures are classified according to the cylinder position, and a map of the flow structures is provided for each Rayleigh number (Ra). At Ra = 10³ and 10⁴, four modes of flow structure form, classified mainly by the large circulation and the inner vortices. At Ra = 10⁵, one mode that existed at Ra = 10³ and 10⁴ disappears from the map, and three new modes appear, giving a total of six modes of flow structure. The new modes at Ra = 10⁵ are characterized by secondary vortices on the top side. The corresponding isotherms are presented to explain the bifurcation of the flow structures.
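For reference, the dimensionless groups governing this problem are defined as follows (standard definitions, not taken from the abstract; L is assumed here to be the enclosure side length, with T_h and T_c the cylinder and enclosure wall temperatures):

```latex
\mathrm{Ra} = \frac{g\,\beta\,(T_h - T_c)\,L^{3}}{\nu\,\alpha},
\qquad
\mathrm{Pr} = \frac{\nu}{\alpha}
```

where g is the gravitational acceleration, β the thermal expansion coefficient, ν the kinematic viscosity, and α the thermal diffusivity of the fluid.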
To provide information for 360-degree visual space exploration, we design experiments to measure and analyze object-centric visual preference. After defining the static and dynamic properties of the objects of interest, we collect real-shot 360-degree videos and synthesize computer-generated 360-degree videos so that the objects have different combinations of static and dynamic properties. From the head movement trajectories of subjects wearing head-mounted displays while watching the 360-degree videos, we compare visual preference between objects with different static and dynamic properties. The experimental results indicate that subjects prefer certain static and dynamic properties of objects over others; with this knowledge, visually salient viewports can be constructed by detecting and comparing the static and dynamic properties of objects in a 360-degree video.

INDEX TERMS 360-degree video; visual attention; visual preference; static property; dynamic property.