Direct methods for Visual Odometry (VO) have gained popularity due to their capability to exploit information from all intensity gradients in the image. However, low computational speed and missing guarantees for optimality and consistency are limiting factors of direct methods, in areas where established feature-based methods succeed instead. Based on these considerations, we propose a Semi-direct VO (SVO) that uses direct methods to track and triangulate pixels characterized by high image gradients, but relies on proven feature-based methods for the joint optimization of structure and motion. Together with a robust probabilistic depth-estimation algorithm, this enables us to efficiently track pixels lying on weak corners and edges in environments with little or high-frequency texture. We further demonstrate that the algorithm can easily be extended to multiple cameras, to track edges, to include motion priors, and to enable the use of very large field-of-view cameras, such as fisheye and catadioptric ones. Experimental evaluation on benchmark datasets shows that the algorithm is significantly faster than the state of the art while achieving highly competitive accuracy.
Transport of objects is a major application in robotics today. While ground robots can carry heavy payloads over long distances, they are limited in rugged terrain. Aerial robots can deliver objects in arbitrary terrain; however, they tend to be limited in payload capacity. It has been previously shown that, for heavy payloads, it can be beneficial to carry them using multiple flying robots. In this paper, we propose a novel collaborative transport scheme in which two quadrotors transport a cable-suspended payload at accelerations that exceed the capabilities of previous collaborative approaches, which make quasi-static assumptions. Furthermore, this is achieved entirely without explicit communication between the collaborating robots, making our system robust to communication failures and making consensus on a common reference frame unnecessary. Instead, the robots rely only on visual and inertial cues obtained from on-board sensors. We implement and validate the proposed method on a real system.
We have developed and proven the viability of a system for massively parallel in-situ sampling of aerial images at the actual wafer plane of a 193 nm production scanner, using a wafer-like high-resolution image sensor. The sensor and scanner can be operated under exact production conditions in terms of projection optics, all illumination conditions, and laser wavelength and bandwidth, so that the sensor is sensitive to all effects arising from the interaction of an actual scanner with an actual reticle. We demonstrate the basic image-capturing operation of the sensor, using more than 400,000 sampling points across the exposure field, as well as the fundamental capabilities of the system. These include the generation of focus maps, line-width measurements on the sensor images, sensitivity to sub-resolution features, sensitivity to aberrations, and excellent agreement between experimental data and simulation.
We introduce a novel in-scanner aerial image sampling technique using a sensor wafer that can be loaded into a production scanner to acquire data at the wafer plane, under exact production conditions in terms of all optical settings and parameters of the actual scanner and an actual reticle. We demonstrate the applicability of this system to CD uniformity characterization of a production scanner in combination with a test reticle. CD estimates can be obtained directly from the image data by applying a fixed threshold, by applying a calibrated resist model to the sensor data, or by using the sensor data to accurately calibrate a complete lithography model. The linear response of the sensor provides complete information on the imaging process, and CD data can be immediately correlated to other image parameters, such as contrast, ILS, and peak signal values. We demonstrate the ability of the system to characterize CD variations and through-pitch curves, and to generate CD uniformity maps across the exposure field. We have extensively studied the repeatability and reproducibility of the system, and show its ability to detect changes in imaging performance over time in a production environment, differences between exposure tools, and different mask manufacturing processes.
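The fixed-threshold CD extraction mentioned above can be sketched as follows: given a one-dimensional aerial-image intensity profile across a dark line, the critical dimension is taken as the width of the region where intensity falls below a fixed threshold, with sub-pixel crossings located by linear interpolation. This is a minimal illustrative sketch, not the authors' implementation; the function name and the synthetic Gaussian dip profile are assumptions made here for demonstration.

```python
import numpy as np

def cd_at_threshold(x, intensity, threshold):
    """Estimate the CD of a dark line as the width of the intensity
    region below `threshold`, interpolating the crossing points
    linearly between samples for sub-pixel resolution."""
    below = intensity < threshold
    if not below.any():
        return 0.0
    idx = np.flatnonzero(below)
    i0, i1 = idx[0], idx[-1]
    # Left crossing: interpolate between the last sample above
    # threshold and the first sample below it.
    if i0 > 0:
        f = (threshold - intensity[i0 - 1]) / (intensity[i0] - intensity[i0 - 1])
        left = x[i0 - 1] + f * (x[i0] - x[i0 - 1])
    else:
        left = x[i0]
    # Right crossing: interpolate between the last sample below
    # threshold and the next sample above it.
    if i1 < len(x) - 1:
        f = (threshold - intensity[i1]) / (intensity[i1 + 1] - intensity[i1])
        right = x[i1] + f * (x[i1 + 1] - x[i1])
    else:
        right = x[i1]
    return right - left

# Synthetic example: a Gaussian intensity dip sampled on a 1 nm grid.
x = np.linspace(-100.0, 100.0, 201)
profile = 1.0 - 0.8 * np.exp(-(x / 30.0) ** 2)
cd = cd_at_threshold(x, profile, threshold=0.5)  # ~41 nm for this profile
```

A calibrated resist model, by contrast, would map the measured intensity profile through a dose-dependent development model before measuring the width, trading this simple geometric rule for physical fidelity.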