Drones allow exploring dangerous or impassable areas safely from a distant point of view. However, flight control from an egocentric view in narrow or constrained environments can be challenging. Arguably, an exocentric view would afford a better overview and, thus, more intuitive flight control of the drone. Unfortunately, such an exocentric view is unavailable when exploring indoor environments. This paper investigates the potential of drone-augmented human vision, i.e., of exploring the environment and controlling the drone indirectly from an exocentric viewpoint. If used with a see-through display, this approach can simulate X-ray vision to provide a natural view into an otherwise occluded environment. The user's view is synthesized from a three-dimensional reconstruction of the indoor environment using image-based rendering. The user interface is designed to reduce the cognitive load of controlling the drone: the user can concentrate on exploring the inaccessible space, while flight control is largely delegated to the drone's autopilot system. We assess our system with a first experiment showing how drone-augmented human vision supports spatial understanding and improves natural interaction with the drone.
Figure 1: Common labeling as used in many AR browsers (left) compared to our image-based approach (right). Not only the position of the labels but also their appearance can be automatically optimized, including depth cues for the labels' anchors and their leader lines.
Figure 1: View management in 3D space. (a) Label placement is constrained by 3D poles that originate from the center of the object. To resolve occlusions, we move labels along the pole only. (b) Label placement is constrained by a set of planes in 3D space. Labels are allowed to move within a plane, which is fixed in 3D space. To avoid constant label motion, the label positions are frozen after creating the layout for a viewpoint. The placement is updated only when the viewing angle to the plane grows larger than a threshold.
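A minimal sketch of the temporal update rule for the plane constraint, written in Python/NumPy under our own assumptions: the threshold value, the `layout_fn` callback, and the reading of the caption as "re-run the layout once the viewing angle has changed by more than a threshold since the layout was frozen" are illustrative choices, not details given by the caption.

```python
import numpy as np

ANGLE_THRESHOLD_DEG = 15.0  # hypothetical threshold; no value is given in the caption

def viewing_angle_deg(camera_pos, plane_point, plane_normal):
    """Angle between the view direction towards the plane and the plane normal."""
    view_dir = plane_point - camera_pos
    view_dir = view_dir / np.linalg.norm(view_dir)
    n = plane_normal / np.linalg.norm(plane_normal)
    cos_a = np.clip(abs(np.dot(view_dir, n)), -1.0, 1.0)
    return np.degrees(np.arccos(cos_a))

class FrozenPlaneLayout:
    """Keeps label positions fixed until the viewing angle to the plane changes enough."""

    def __init__(self, plane_point, plane_normal, layout_fn):
        self.plane_point = np.asarray(plane_point, dtype=float)
        self.plane_normal = np.asarray(plane_normal, dtype=float)
        self.layout_fn = layout_fn      # assumed callback that recomputes label positions in the plane
        self.frozen_angle = None
        self.labels = None

    def update(self, camera_pos):
        angle = viewing_angle_deg(np.asarray(camera_pos, dtype=float),
                                  self.plane_point, self.plane_normal)
        if self.labels is None or abs(angle - self.frozen_angle) > ANGLE_THRESHOLD_DEG:
            self.labels = self.layout_fn(camera_pos)  # re-run label placement for the new viewpoint
            self.frozen_angle = angle                 # freeze the layout at this viewing angle
        return self.labels
```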
ABSTRACT
Annotations of objects in 3D environments are commonly controlled using view management techniques. State-of-the-art view management strategies for external labels operate in 2D image space. This creates problems, because the 2D view of a 3D scene changes over time, and the temporal behavior of elements in a 3D scene is not obvious in 2D image space. We propose managing the placement of external labels in 3D object space instead. We use 3D geometric constraints to achieve label placement that fulfills the desired objectives (e.g., avoiding overlapping labels), but also behaves consistently over time as the viewpoint changes. We propose two geometric constraints: a 3D pole constraint, where labels move along a 3D pole sticking out from the annotated object, and a plane constraint, where labels move in a dominant plane in the world. This formulation is compatible with standard optimization approaches for labeling, but overcomes the lack of temporal coherence.
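To illustrate the 3D pole constraint, the following Python/NumPy sketch resolves label overlaps greedily by sliding each label outwards along its pole until its projected 2D box no longer conflicts with already placed labels. The greedy strategy, function names, and step/offset parameters are illustrative assumptions; the abstract only states that the constraint is used within standard labeling optimization, not this exact procedure.

```python
import numpy as np

def project(point3d, view_proj, viewport_wh):
    """Project a 3D point to 2D pixel coordinates (assumed column-vector convention)."""
    p = view_proj @ np.append(point3d, 1.0)
    p = p / p[3]
    w, h = viewport_wh
    return np.array([(p[0] * 0.5 + 0.5) * w, (p[1] * 0.5 + 0.5) * h])

def overlaps(a, b):
    """Axis-aligned overlap test between two 2D boxes given as (cx, cy, w, h)."""
    return (abs(a[0] - b[0]) * 2 < a[2] + b[2]) and (abs(a[1] - b[1]) * 2 < a[3] + b[3])

def place_labels_on_poles(anchors, pole_dirs, sizes, view_proj, viewport_wh,
                          step=0.05, max_offset=2.0):
    """Greedy pole-constrained placement: each label may only slide along its own 3D pole
    (a ray from the annotated object), which keeps its motion predictable across viewpoints."""
    placed = []        # 2D boxes of labels accepted so far
    positions3d = []   # resulting 3D label positions
    for anchor, direction, (w, h) in zip(anchors, pole_dirs, sizes):
        anchor = np.asarray(anchor, dtype=float)
        direction = np.asarray(direction, dtype=float)
        direction = direction / np.linalg.norm(direction)
        offset = 0.0
        while offset <= max_offset:
            pos3d = anchor + offset * direction
            cx, cy = project(pos3d, view_proj, viewport_wh)
            box = (cx, cy, w, h)
            if not any(overlaps(box, other) for other in placed):
                break
            offset += step  # push the label further out along its pole
        placed.append(box)
        positions3d.append(pos3d)
    return positions3d
```

Restricting each label to a single degree of freedom along its pole is what makes the layout behave coherently over time: as the camera moves, a label can only slide in or out along a fixed 3D direction instead of jumping freely in image space.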