We present a challenging dataset, TartanAir, for robot navigation tasks and more. The data is collected in photo-realistic simulation environments in the presence of various lighting conditions, weather, and moving objects. By collecting data in simulation, we are able to obtain multimodal sensor data and precise ground-truth labels, including stereo RGB images, depth images, segmentation, optical flow, camera poses, and LiDAR point clouds. We set up a large number of environments with various styles and scenes, covering challenging viewpoints and diverse motion patterns that are difficult to achieve with physical data collection platforms. To enable data collection at such a large scale, we develop an automatic pipeline that includes mapping, trajectory sampling, data processing, and data verification. We evaluate the impact of various factors on visual SLAM algorithms using our data. The results of state-of-the-art algorithms reveal that the visual SLAM problem is far from solved: methods that perform well on established datasets such as KITTI do not perform well in more difficult scenarios. Although we use simulation, our goal is to push the limits of visual SLAM algorithms in the real world by providing a challenging benchmark for testing new methods, as well as large, diverse training data for learning-based methods. Our dataset is available at http://theairlab.org/tartanair-dataset.
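A common way to benchmark a visual SLAM result against ground-truth poses such as those provided by the dataset is the absolute trajectory error (ATE). The sketch below is a minimal illustration, not the benchmark's official evaluation code; the file names and the assumed per-line "tx ty tz qx qy qz qw" layout are hypothetical and should be checked against the dataset documentation.

```python
# Minimal ATE sketch (assumptions, not the dataset's evaluation tooling).
import numpy as np

def load_positions(path):
    """Load Nx3 camera positions from a plain-text pose file
    assumed to store one pose per line as 'tx ty tz qx qy qz qw'."""
    data = np.loadtxt(path)
    return data[:, :3]                      # keep translation only

def ate_rmse(gt, est):
    """Rigidly align est to gt (closed-form SVD/Kabsch, no scale) and
    return the root-mean-square translation error in meters."""
    mu_g, mu_e = gt.mean(0), est.mean(0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                      # reflection-corrected rotation
    aligned = (R @ (est - mu_e).T).T + mu_g
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))

# Example usage (hypothetical file names):
# gt  = load_positions("pose_gt.txt")
# est = load_positions("pose_slam.txt")
# print(f"ATE RMSE: {ate_rmse(gt, est):.3f} m")
```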
Aerial cinematography is revolutionizing industries that require live and dynamic camera viewpoints, such as entertainment, sports, and security. However, safely piloting a drone while filming a moving target in the presence of obstacles is immensely taxing, often requiring multiple expert human operators. Hence, there is a demand for an autonomous cinematographer that can reason about both geometry and scene context in real time. Existing approaches do not address all aspects of this problem; they either require high-precision motion-capture systems or global positioning system tags to localize targets, rely on prior maps of the environment, plan for short time horizons, or only follow fixed artistic guidelines specified before the flight. In this study, we address the problem in its entirety and propose a complete system for real-time aerial cinematography that, for the first time, combines: (a) vision-based target estimation; (b) 3D signed-distance mapping for occlusion estimation; (c) efficient trajectory optimization for long time-horizon camera motion; and (d) learning-based artistic shot selection. We extensively evaluate our system both in simulation and in field experiments by filming dynamic targets moving through unstructured environments. Our results indicate that our system can operate reliably in the real world without restrictive assumptions. We also provide in-depth analysis and discussions for each module, with the hope that our design tradeoffs can generalize to other related applications. Videos of the complete system can be found at https://youtu.be/ookhHnqmlaU.
Keywords: aerial robotics, cinematography, computer vision, learning, mapping, motion planning
Within the filming context, this cost function measures jerkiness of motion, safety, environmental occlusion of the actor, and shot quality (artistic quality of viewpoints). The cost function depends on the environment occupancy map, denoted 𝒢, and on the actor forecast ξ_a, both of which are sensed on-the-fly. The changing nature of 𝒢 and ξ_a demands replanning at a high frequency. Here we briefly touch upon the four components of the cost function J(ξ_q) (refer to Section 7 for details and mathematical expressions):
Smoothness J_smooth(ξ_q): penalizes jerky motions that may lead to camera blur and unstable flight;
Safety J_obs(ξ_q, 𝒢): penalizes proximity to obstacles that are unsafe for the UAV;
Occlusion J_occ(ξ_q, ξ_a, 𝒢): penalizes occlusion of the actor by obstacles in the environment;
Shot quality J_shot(ξ_q, ξ_a, Ω_art): penalizes poor viewpoint angles and scales that deviate from the desired artistic guidelines, given by the set of parameters Ω_art.
In its simplest form, we can express J(ξ_q) as a linear composition of the individual costs, weighted by scalars λ_i. The objective is then to find
ξ_q* = argmin_{ξ_q} J(ξ_q) = argmin_{ξ_q} { λ_1 J_smooth(ξ_q) + λ_2 J_obs(ξ_q, 𝒢) + λ_3 J_occ(ξ_q, ξ_a, 𝒢) + λ_4 J_shot(ξ_q, ξ_a, Ω_art) }.
[Table fragment: planning with J_occ + J_obs versus J_obs only; a percentage metric (99.4 ± 2.2 / 94.2 ± 7.3 / 86.9 ± 9.3 versus 98.8 ± 3.0 / 87.1 ± 8.5 / 75.3 ± 11.8) and the average distance to ξ_shot in meters (0.4 ± 0.4 / 6.2 ± 11.2 / 10.7 ± 13.2 versus 0.05 ± 0.1 / 0.3 ± 0.2 / 0.5 ± 0...); column headers not recovered.]
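As an illustration of the linear cost composition above, the following Python sketch combines simplified stand-ins for the four terms. The individual cost functions, the 1 m obstacle activation distance, and the default weights are placeholder assumptions for demonstration, not the paper's formulations (which use a signed-distance map and the artistic parameters Ω_art).

```python
# Minimal sketch (not the authors' implementation) of
# J(xi_q) = l1*J_smooth + l2*J_obs + l3*J_occ + l4*J_shot.
import numpy as np

def j_smooth(xi_q):
    """Penalize large second differences (a crude stand-in for jerk)."""
    return float(np.sum(np.diff(xi_q, n=2, axis=0) ** 2))

def j_obs(xi_q, dist_to_obstacle):
    """Penalize proximity to obstacles via a distance-to-obstacle callable."""
    d = np.array([dist_to_obstacle(p) for p in xi_q])
    return float(np.sum(np.maximum(0.0, 1.0 - d)))   # active within 1 m (assumed)

def j_occ(xi_q, xi_a, line_of_sight_blocked):
    """Penalize camera poses whose line of sight to the actor is blocked."""
    return float(sum(line_of_sight_blocked(q, a) for q, a in zip(xi_q, xi_a)))

def j_shot(xi_q, xi_a, desired_offset):
    """Penalize deviation from a desired camera offset relative to the actor."""
    return float(np.sum((xi_q - (xi_a + desired_offset)) ** 2))

def total_cost(xi_q, xi_a, dist_fn, los_fn, desired_offset,
               lambdas=(1.0, 10.0, 5.0, 1.0)):   # weights are illustrative only
    l1, l2, l3, l4 = lambdas
    return (l1 * j_smooth(xi_q) + l2 * j_obs(xi_q, dist_fn)
            + l3 * j_occ(xi_q, xi_a, los_fn)
            + l4 * j_shot(xi_q, xi_a, desired_offset))
```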
Autonomous aerial cinematography has the potential to enable automatic capture of aesthetically pleasing videos without requiring human intervention, empowering individuals with the capabilities of high-end film studios. Current approaches either only handle off-line trajectory generation, or offer strategies that reason over short time horizons with simplistic obstacle representations, which results in jerky movement and low real-life applicability. In this work we develop a method for aerial filming that is able to trade off shot smoothness, occlusion, and cinematography guidelines in a principled manner, even under noisy actor predictions. We present a novel algorithm for real-time covariant gradient descent that we use to efficiently find the desired trajectories by optimizing a set of cost functions. Experimental results show that our approach creates attractive shots, avoiding obstacles and occlusion 65 times over 1.25 hours of flight time while re-planning at 5 Hz with a 10 s time horizon. We robustly film human actors, cars, and bicycles performing different motions among obstacles, using various shot types.
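A covariant gradient step of the kind referenced above can be read as preconditioning the Euclidean gradient of the cost by the inverse of a finite-difference smoothness metric A, i.e. ξ ← ξ − (1/η) A⁻¹ ∇J(ξ), in the spirit of CHOMP-style trajectory optimization. The code below is a toy illustration under that assumption, not the authors' planner; the step size, ridge term, and example cost are made up for demonstration.

```python
# Toy CHOMP-style covariant gradient descent step (illustrative sketch).
import numpy as np

def smoothness_metric(n):
    """A = K^T K, where K is the finite-difference acceleration operator."""
    K = np.zeros((n - 2, n))
    for i in range(n - 2):
        K[i, i:i + 3] = [1.0, -2.0, 1.0]
    return K.T @ K + 1e-6 * np.eye(n)        # small ridge keeps A invertible

def covariant_step(xi, grad_fn, eta=10.0):
    """One covariant update: xi <- xi - (1/eta) * A^-1 * grad J(xi)."""
    A = smoothness_metric(xi.shape[0])
    grad = grad_fn(xi)                        # (n, d) Euclidean gradient of J
    return xi - (1.0 / eta) * np.linalg.solve(A, grad)

# Example usage with a toy quadratic cost pulling waypoints toward a goal:
# xi = np.linspace([0.0, 0.0, 2.0], [10.0, 0.0, 2.0], 50)   # 50 waypoints in 3D
# goal_grad = lambda x: x - np.array([10.0, 0.0, 2.0])
# for _ in range(20):
#     xi = covariant_step(xi, goal_grad)
```

Preconditioning by A⁻¹ spreads a local gradient over the whole trajectory, so each update stays smooth instead of kinking individual waypoints.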
Arctic clouds can profoundly influence surface radiation and thus surface melt. Over Greenland, these cloud radiative effects (CRE) vary greatly with the diverse topography. To investigate the ability of assorted platforms to reproduce heterogeneous CRE, we evaluate CRE spatial distributions from a satellite product, reanalyses, and a global climate model against estimates from 21 automatic weather stations (AWS). Net CRE estimated from AWS generally decreases with elevation, forming a “warm center” distribution. CRE areal averages from the five large‐scale data sets we analyze are all around 10 W/m2. Modern‐Era Retrospective Analysis for Research and Applications version 2 (MERRA‐2), ERA‐Interim, and Clouds and the Earth's Radiant Energy System (CERES) CRE estimates agree with AWS and reproduce the warm center distribution. However, the National Center for Atmospheric Research Arctic System Reanalysis (ASR) and the Community Earth System Model Large ENSemble Community Project (LENS) show strong warming in the south and northwest, forming a warm L‐shape distribution. Discrepancies are mainly caused by longwave CRE in the accumulation zone. MERRA‐2, ERA‐Interim, and CERES successfully reproduce cloud fraction and its dominant positive influence on longwave CRE in this region. On the other hand, longwave CRE from ASR and LENS correlates strongly with ice water path instead of with cloud fraction or liquid water path. Moreover, ASR overestimates cloud fraction and LENS underestimates liquid water path substantially, both with limited spatial variability. MERRA‐2 best captures the observed interstation changes, captures most of the observed cloud‐radiation physics, and largely reproduces both albedo and cloud properties. The warm center CRE spatial distribution indicates that clouds enhance surface melt in the higher accumulation zone and reduce surface melt in the lower ablation zone.
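Net surface CRE, as used here, is conventionally the all-sky minus clear-sky difference of the net radiative flux at the surface, and it splits into shortwave and longwave components. The snippet below is a minimal sketch of that definition assuming paired all-sky and clear-sky flux arrays (in W/m²) are available; it is not the paper's processing code.

```python
# Minimal sketch of net surface cloud radiative effect (CRE), in W/m^2.
import numpy as np

def net_cre(sw_dn, sw_up, lw_dn, lw_up,
            sw_dn_clr, sw_up_clr, lw_dn_clr, lw_up_clr):
    """Return (CRE_SW, CRE_LW, CRE_net), each the same shape as the inputs."""
    cre_sw = (sw_dn - sw_up) - (sw_dn_clr - sw_up_clr)   # shortwave component
    cre_lw = (lw_dn - lw_up) - (lw_dn_clr - lw_up_clr)   # longwave component
    return cre_sw, cre_lw, cre_sw + cre_lw               # net = SW + LW
```

By construction CRE_net = CRE_SW + CRE_LW, and positive values indicate that clouds warm the surface.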