Aerial cinematography is revolutionizing industries that require live and dynamic camera viewpoints, such as entertainment, sports, and security. However, safely piloting a drone while filming a moving target in the presence of obstacles is immensely taxing, often requiring multiple expert human operators. Hence, there is a demand for an autonomous cinematographer that can reason about both geometry and scene context in real time. Existing approaches do not address all aspects of this problem; they either require high-precision motion-capture systems or GPS tags to localize targets, rely on prior maps of the environment, plan for short time horizons, or only follow fixed artistic guidelines specified before the flight. In this study, we address the problem in its entirety and propose a complete system for real-time aerial cinematography that for the first time combines: (a) vision-based target estimation; (b) 3D signed-distance mapping for occlusion estimation; (c) efficient trajectory optimization for long time-horizon camera motion; and (d) learning-based artistic shot selection. We extensively evaluate our system both in simulation and in field experiments by filming dynamic targets moving through unstructured environments. Our results indicate that our system can operate reliably in the real world without restrictive assumptions. We also provide in-depth analysis and discussions for each module, with the hope that our design tradeoffs can generalize to other related applications. Videos of the complete system can be found at https://youtu.be/ookhHnqmlaU.

KEYWORDS: aerial robotics, cinematography, computer vision, learning, mapping, motion planning

Within the filming context, this cost function measures jerkiness of motion, safety, environmental occlusion of the actor, and shot quality (the artistic quality of viewpoints). The cost depends on the sensed environment and on the actor forecast ξ_a, both of which are estimated on the fly. The changing nature of the environment and of ξ_a demands replanning at a high frequency. Here we briefly touch upon the four components of the cost function J(ξ_q) (refer to Section 7 for details and mathematical expressions):
Smoothness J_smooth(ξ_q): penalizes jerky motions that may lead to camera blur and unstable flight;
Safety J_obs(ξ_q): penalizes proximity to obstacles that are unsafe for the UAV;
Occlusion J_occ(ξ_q, ξ_a): penalizes occlusion of the actor by obstacles in the environment;
Shot quality J_shot(ξ_q, ξ_a, Ω_art): penalizes poor viewpoint angles and scales that deviate from the desired artistic guidelines, given by the set of parameters Ω_art.
In its simplest form, we can express J(ξ_q) as a linear composition of each individual cost, weighted by scalars λ_i. The objective is to find the trajectory that minimizes the total cost:
$$\xi_q^{*} = \arg\min_{\xi_q} J(\xi_q) = \arg\min_{\xi_q} \left\{ \lambda_1 J_{\mathrm{smooth}}(\xi_q) + \lambda_2 J_{\mathrm{obs}}(\xi_q) + \lambda_3 J_{\mathrm{occ}}(\xi_q, \xi_a) + \lambda_4 J_{\mathrm{shot}}(\xi_q, \xi_a, \Omega_{\mathrm{art}}) \right\}$$
[Table excerpt comparing J_occ + J_obs against J_obs alone across three test conditions: 99.4 ± 2.2 / 94.2 ± 7.3 / 86.9 ± 9.3 vs. 98.8 ± 3.0 / 87.1 ± 8.5 / 75.3 ± 11.8 on the first metric, and avg. dist. to ξ_shot (m) of 0.4 ± 0.4 / 6.2 ± 11.2 / 10.7 ± 13.2 vs. 0.05 ± 0.1 / 0.3 ± 0.2 / 0.5 ± 0…]
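As a concrete illustration of the weighted composition above, the Python sketch below combines four such cost terms into a single objective J(ξ_q). The cost-function bodies, the default weights, and the helper callbacks (dist_to_obstacle, line_of_sight_blocked, desired_offset) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of a weighted cost composition J(xi_q) in the spirit described above.
# All function bodies and parameters are illustrative placeholders (assumptions).

def j_smooth(xi_q):
    # Penalize jerk: third finite difference of the (n, 3) position trajectory.
    jerk = np.diff(xi_q, n=3, axis=0)
    return float(np.sum(jerk ** 2))

def j_obs(xi_q, dist_to_obstacle, margin=1.0):
    # Penalize proximity to obstacles via a user-supplied distance query (assumed).
    d = np.array([dist_to_obstacle(p) for p in xi_q])
    return float(np.sum(np.maximum(0.0, margin - d) ** 2))

def j_occ(xi_q, xi_a, line_of_sight_blocked):
    # Penalize waypoints whose line of sight to the actor is blocked (assumed callback).
    blocked = [line_of_sight_blocked(q, a) for q, a in zip(xi_q, xi_a)]
    return float(np.mean(blocked))

def j_shot(xi_q, xi_a, desired_offset):
    # Penalize deviation from an artistically desired camera offset (stand-in for Omega_art).
    return float(np.sum((xi_q - (xi_a + np.asarray(desired_offset))) ** 2))

def total_cost(xi_q, xi_a, dist_to_obstacle, line_of_sight_blocked, desired_offset,
               lambdas=(1.0, 10.0, 5.0, 0.1)):
    # Linear composition of the four terms, weighted by scalars lambda_i (values assumed).
    l1, l2, l3, l4 = lambdas
    return (l1 * j_smooth(xi_q)
            + l2 * j_obs(xi_q, dist_to_obstacle)
            + l3 * j_occ(xi_q, xi_a, line_of_sight_blocked)
            + l4 * j_shot(xi_q, xi_a, desired_offset))
```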
Autonomous aerial cinematography has the potential to enable automatic capture of aesthetically pleasing videos without requiring human intervention, giving individuals capabilities once reserved for high-end film studios. Current approaches either only handle offline trajectory generation or offer strategies that reason over short time horizons with simplistic obstacle representations, which results in jerky movement and low real-life applicability. In this work, we develop a method for aerial filming that is able to trade off shot smoothness, occlusion, and cinematography guidelines in a principled manner, even under noisy actor predictions. We present a novel algorithm for real-time covariant gradient descent that we use to efficiently find the desired trajectories by optimizing a set of cost functions. Experimental results show that our approach creates attractive shots, avoiding obstacles and occlusion 65 times over 1.25 hours of flight time, replanning at 5 Hz with a 10 s time horizon. We robustly film human actors, cars, and bicycles performing different motions among obstacles, using various shot types.
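The covariant gradient descent idea can be pictured with a short CHOMP-style sketch: precondition the Euclidean gradient of the cost by the inverse of a smoothness metric so that each update perturbs the whole trajectory smoothly rather than point by point. The finite-difference metric, step size, and grad_cost callback below are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

# Hedged sketch of one covariant (preconditioned) gradient step on a discretized
# trajectory xi of shape (n, dim). Parameters and helpers are assumptions.

def finite_diff_matrix(n):
    # Second-order finite-difference operator K, so x^T (K^T K) x measures acceleration.
    K = np.zeros((n - 2, n))
    for i in range(n - 2):
        K[i, i:i + 3] = [1.0, -2.0, 1.0]
    return K

def covariant_step(xi, grad_cost, step_size=0.1):
    # Precondition the Euclidean gradient with A^{-1}, where A = K^T K is the
    # smoothness metric, so the update itself stays smooth.
    n, _ = xi.shape
    K = finite_diff_matrix(n)
    A = K.T @ K + 1e-6 * np.eye(n)   # regularized smoothness metric
    g = grad_cost(xi)                # (n, dim) Euclidean gradient of the total cost
    return xi - step_size * np.linalg.solve(A, g)

# Usage idea: iterate covariant_step at the replanning rate (e.g., 5 Hz),
# warm-starting from the previous solution over the planning horizon.
```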
The use of drones for aerial cinematography has revolutionized several applications and industries that require live and dynamic camera viewpoints, such as entertainment, sports, and security. However, safely controlling a drone while filming a moving target usually requires multiple expert human operators; hence the need for an autonomous cinematographer. Current approaches have severe real-life limitations, such as requiring fully scripted scenes, high-precision motion-capture systems or GPS tags to localize targets, and prior maps of the environment to avoid obstacles and plan for occlusion. In this work, we overcome such limitations and propose a complete system for aerial cinematography that combines: (1) a vision-based algorithm for target localization; (2) a real-time incremental 3D signed-distance mapping algorithm for occlusion and safety computation; and (3) a real-time camera motion planner that jointly optimizes smoothness, collision avoidance, occlusion avoidance, and artistic guidelines. We evaluate robustness and real-time performance in a series of field experiments and simulations by tracking dynamic targets moving through unknown, unstructured environments. Finally, we verify that despite removing these limitations, our system achieves state-of-the-art performance.
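For intuition on how a signed-distance map can support occlusion and safety computation, the sketch below raycasts between camera and actor positions and queries an assumed sdf_lookup function at sampled points; the clearance threshold and sampling step are also assumptions, not the system's actual parameters.

```python
import numpy as np

# Illustrative line-of-sight test against a signed-distance map (assumed interface):
# sample points along the camera->actor segment and check each against the map.

def line_of_sight_clear(camera_pos, actor_pos, sdf_lookup, clearance=0.2, step=0.25):
    # Returns True if no sample along the ray is closer than `clearance` meters
    # to an obstacle, according to the signed-distance lookup.
    camera_pos = np.asarray(camera_pos, dtype=float)
    actor_pos = np.asarray(actor_pos, dtype=float)
    length = np.linalg.norm(actor_pos - camera_pos)
    n_samples = max(2, int(length / step))
    for t in np.linspace(0.0, 1.0, n_samples):
        p = (1.0 - t) * camera_pos + t * actor_pos
        if sdf_lookup(p) < clearance:   # inside or too close to an obstacle
            return False
    return True
```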
Aerial filming is constantly gaining importance due to recent advances in drone technology. It invites many intriguing, unsolved problems at the intersection of aesthetic and scientific challenges. In this work, we propose a deep reinforcement learning agent that supervises the motion planning of a filming drone by making desirable shot-mode selections based on the aesthetic value of video shots. Unlike most current state-of-the-art approaches, which require explicit guidance by a human expert, our drone learns how to make favorable viewpoint selections from experience. We propose a learning scheme that exploits the aesthetic features of past shots in order to extract a policy that produces better future shots. We train our agent in realistic AirSim simulations using both a hand-crafted reward function and rewards from direct human input. We then deploy the same agent on a real DJI M210 drone to test how well our approach generalizes to real-world conditions. Finally, to evaluate the success of our approach, we conduct a comprehensive user study in which participants rate the quality of the shots produced by our methods. Videos of the system in action can be seen at https://youtu.be/qmVw6mfyEmw.
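For intuition, here is a minimal tabular Q-learning stand-in for the deep reinforcement learning agent described above: discrete shot modes are selected ε-greedily and their value estimates are updated from an aesthetic reward, which could come from a hand-crafted score or from direct human ratings. The state abstraction, shot-mode set, and hyperparameters are illustrative assumptions rather than the paper's setup.

```python
import random
from collections import defaultdict

# Toy tabular stand-in for a shot-mode selection agent (assumed modes and parameters).
SHOT_MODES = ["left", "right", "front", "back"]

class ShotSelector:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)             # (state, mode) -> value estimate
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select(self, state):
        # Epsilon-greedy choice over discrete shot modes.
        if random.random() < self.epsilon:
            return random.choice(SHOT_MODES)
        return max(SHOT_MODES, key=lambda m: self.q[(state, m)])

    def update(self, state, mode, reward, next_state):
        # Standard Q-learning backup; `reward` is an aesthetic score for the last shot
        # (hand-crafted or provided by a human rater).
        best_next = max(self.q[(next_state, m)] for m in SHOT_MODES)
        td_target = reward + self.gamma * best_next
        self.q[(state, mode)] += self.alpha * (td_target - self.q[(state, mode)])
```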