Abstract: 3D gaze information is important for scene-centric attention analysis, but accurate estimation and analysis of 3D gaze in real-world environments remain challenging. We present a novel 3D gaze estimation method for monocular head-mounted eye trackers. In contrast to previous work, our method does not aim to infer 3D eyeball poses, but directly maps 2D pupil positions to 3D gaze directions in scene camera coordinate space. We first provide a detailed discussion of the 3D gaze estimation task and summarize different methods, including our own. We then evaluate the performance of different 3D gaze estimation approaches using both simulated and real data. Through experimental validation, we demonstrate the effectiveness of our method in reducing parallax error, and we identify research challenges for the design of 3D calibration procedures.
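To make the direct mapping concrete, the following is a minimal sketch of one way to map 2D pupil positions to 3D gaze directions, assuming quadratic polynomial features fitted by least squares to calibration data; the paper's exact feature set and fitting procedure may differ.

    # Minimal sketch of a direct 2D-pupil-to-3D-gaze mapping via polynomial
    # regression. The feature set, polynomial degree, and fitting procedure
    # are illustrative assumptions, not the paper's exact formulation.
    import numpy as np

    def poly_features(p):
        """Quadratic features of a 2D pupil position p = (x, y)."""
        x, y = p
        return np.array([1.0, x, y, x * y, x * x, y * y])

    def fit_mapping(pupil_2d, gaze_3d):
        """Least-squares fit from pupil features to 3D gaze directions.

        pupil_2d: (N, 2) pupil positions from the eye camera.
        gaze_3d:  (N, 3) unit gaze directions in scene camera coordinates,
                  collected during a calibration procedure.
        """
        X = np.stack([poly_features(p) for p in pupil_2d])  # (N, 6)
        W, *_ = np.linalg.lstsq(X, gaze_3d, rcond=None)     # (6, 3)
        return W

    def predict_gaze(W, pupil):
        """Map one pupil position to a unit 3D gaze direction."""
        g = poly_features(pupil) @ W
        return g / np.linalg.norm(g)

Because the regression outputs a direction rather than a fixed-depth point, the mapping avoids assuming a single calibration plane, which is where parallax error in 2D-to-2D mappings originates.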
Abstract: Line smoothing is the process of curving lines to make them appear smoother; more precisely, it is the re-representation of a polyline so that fewer points capture the essential shape of the line. It also typically reduces noise in a signal. Smoothing can be applied to a vector, a spline, or a list of points corresponding to a line or signal. Many algorithms are available for automated line smoothing, which is commonly regarded as a comparatively simple operation; in practice, however, these algorithms are often complex to apply. In this paper, we present a new method, a basic technique that can efficiently smooth a list of points. We focus on preserving the characteristics of the line while avoiding distortion. Our goal is to demonstrate a flexible method that preserves features of the input, based on its characteristics, with few tunable constants. Since the technique applies to both vectors and lists of points, it is also useful in map generalization. Selected test examples are illustrated and discussed, followed by an assessment of the models. Finally, the results of the proposed method are examined, showing more stable feature preservation and better noise reduction than existing methods reported in the literature.
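For reference, the sketch below shows a generic baseline polyline smoother, an iterated moving average with fixed endpoints; the abstract does not specify the proposed method, so this illustrates only the general operation of smoothing a list of points, with the window size and iteration count as assumed tuning constants.

    # Generic baseline polyline smoother (iterated moving average).
    # This is NOT the paper's proposed method, which the abstract does not
    # detail; endpoints are held fixed to preserve the line's extent.
    import numpy as np

    def smooth_polyline(points, window=5, iterations=1):
        """Smooth an (N, 2) array of polyline vertices with a moving average."""
        pts = np.asarray(points, dtype=float)
        half = window // 2
        for _ in range(iterations):
            out = pts.copy()
            for i in range(1, len(pts) - 1):          # keep endpoints fixed
                lo, hi = max(0, i - half), min(len(pts), i + half + 1)
                out[i] = pts[lo:hi].mean(axis=0)      # average the neighborhood
            pts = out
        return pts

A baseline like this trades feature preservation for noise reduction uniformly along the line, which is exactly the behavior the paper's characteristic-preserving method aims to improve on.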
The authors present a framework for image-based editing of surface appearance in light-field data. Their framework improves over the state of the art without the need for a full "inverse rendering," so that neither complete geometric data nor the absence of highly specular or reflective surfaces is required. It is robust to noisy or missing data and handles many types of camera-array setups, ranging from a dense light field to a wide-baseline stereo image pair. The method first extracts intrinsic layers from the light-field image set while maintaining consistency between views. Each layer is then decomposed separately into frequency bands, to which a wide range of "band-sifting" operations is applied. This approach enables a rich variety of perceptually plausible surface finishes and materials, achieving novel effects such as translucency. A GPU-based implementation allows interactive editing of an arbitrary light-field view, which can then be consistently propagated to the remaining views. The authors provide an extensive evaluation of their framework on various datasets and against state-of-the-art solutions.
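As an illustration of the band-decomposition step, the following sketch splits a single intrinsic layer into difference-of-Gaussians frequency bands and re-weights them before reassembly; the blur scales, gains, and the omission of any amplitude- or sign-based band selection are simplifying assumptions, and `albedo_layer` in the usage comment is a hypothetical input.

    # Minimal "band-sifting"-style sketch: split an intrinsic layer into
    # frequency bands (differences of Gaussians), re-weight selected bands,
    # and recombine. The sigmas and gains are illustrative assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def band_decompose(layer, sigmas=(1, 2, 4, 8)):
        """Return a list of frequency bands plus a low-pass residual."""
        bands, current = [], np.asarray(layer, dtype=float)
        for s in sigmas:
            low = gaussian_filter(current, sigma=s)
            bands.append(current - low)   # detail removed by this blur level
            current = low
        bands.append(current)             # coarsest (residual) content
        return bands

    def band_sift(layer, gains):
        """Re-weight each band by a gain and reassemble the layer."""
        bands = band_decompose(layer)
        return sum(g * b for g, b in zip(gains, bands))

    # Example: boost high frequencies to exaggerate fine surface detail,
    # e.g. edited = band_sift(albedo_layer, gains=(2.0, 1.5, 1.0, 1.0, 1.0))

Operating on intrinsic layers rather than raw pixels is what lets such band edits read as changes of material or finish instead of generic sharpening.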