Low-level helicopter operations in a Degraded Visual Environment (DVE) remain a major challenge and carry the risk of potentially fatal accidents. DVE encompasses all degradations of the pilot's visual perception, ranging from night conditions through rain and snowfall to fog, and may even include blinding sunlight or unstructured outside scenery. Each of these conditions reduces the pilot's ability to perceive visual cues in the outside world, degrading performance and ultimately increasing the risk of mission failure and of accidents such as Controlled Flight Into Terrain (CFIT). This paper reports on a pilot assistance system that aims to give the essential visual cues back to the pilot by displaying 3D-conformal cues and symbols in a head-tracked Helmet-Mounted Display (HMD), combined with a synthetic view on a head-down Multi-Function Display (MFD). The basis of the presented solution is a fusion of processed and classified high-resolution ladar data with database information, with the potential to also incorporate other sensor data such as forward-looking or 360° radar. Each flight phase and flight envelope requires different symbology sets and different options for the pilots to select specific support functions. Several functionalities have been implemented and tested in a simulator as well as in flight. The symbology ranges from obstacle warnings through terrain enhancements such as grids or ridge lines to various waypoint symbols supporting navigation. While some adaptations can be automated, it emerged as essential that the pilot can select symbology characteristics and completeness to match the current flight envelope and outside visual conditions.
Figure 1: Our method creates abstract stylized objects from a given input model (left). We analyze the shape and its geometry to guide the stylization and abstraction of the object. Essentially, the user makes a selection from a prioritized list of style operands and applies it to the object. The stylized versions of the input can be rendered in various ways using non-photorealistic rendering.
In this paper we propose a dynamic DBSCAN-based method to cluster and visualize unclassified and potentially dangerous obstacles in data sets recorded by a LiDAR sensor. The sensor delivers data sets at short time intervals, from which a spatial superposition of multiple data sets is created. We use this superposition to build clusters incrementally. Knowledge about the position and size of each cluster is used to fuse clusters and to stabilize them across multiple time frames. Cluster stability is a key feature for providing a smooth, non-distracting visualization to the pilot: only a few lines indicate the position of threatening unclassified points that could become hazardous if the helicopter comes too close. Clustering and visualization form part of an entire synthetic vision processing chain, in which the LiDAR points support the generation of a real-time synthetic view of the environment.
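The core idea above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: a naive DBSCAN over 2D LiDAR returns, plus a bounding-box overlap test standing in for the cluster-fusion rule across time frames. All parameters (`eps`, `min_pts`, `margin`) are illustrative assumptions.

```python
import math

def dbscan(points, eps=1.5, min_pts=3):
    """Naive O(n^2) DBSCAN; returns one label per point (-1 = noise)."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        # eps-neighborhood of point i, including i itself
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        cluster += 1                # start a new cluster from core point i
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbors(j)
            if len(nb) >= min_pts:   # j is also a core point: expand
                queue.extend(nb)
    return labels

def boxes_overlap(a, b, margin=0.5):
    """Sketch of a fusion rule (assumed, not from the paper): clusters from
    successive frames merge when their bounding boxes overlap within margin."""
    (ax0, ay0, ax1, ay1), (bx0, by0, bx1, by1) = a, b
    return (ax0 - margin <= bx1 and bx0 - margin <= ax1 and
            ay0 - margin <= by1 and by0 - margin <= ay1)
```

In a superposition of frames, rerunning the clustering on the accumulated points and merging clusters whose boxes overlap keeps labels stable over time, which is the property the abstract highlights for a non-distracting display.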
Helicopter pilots often have to deal with bad weather and degraded visibility. Such situations may significantly decrease the pilots' situational awareness. The worst case is a complete loss of visual reference during an off-field landing due to brownout or whiteout. To increase the pilots' situational awareness, helicopters nowadays are equipped with different sensors that gather information about the terrain ahead of the helicopter. Synthetic vision systems capture and classify sensor data and visualize them on multi-function displays or the pilot's head-up display. This requires the input data to be reliably classified into obstacles and ground. In this paper, we present a regularization-based terrain classifier. Regularization is a popular segmentation method in computer vision and is used in active contours. For a real-time application scenario with LiDAR data, we developed an optimization that uses different levels of detail depending on the accuracy of the sensor. After a preprocessing step that removes points which cannot be ground, the method fits a shape underneath the recorded point cloud. Once this shape is calculated, the points below it can be distinguished from elevated objects and are classified as ground. Finally, we demonstrate the quality of our segmentation approach by applying it to operational flight recordings. The method forms part of an entire synthetic vision processing chain, in which the classified points support the generation of a real-time synthetic view of the terrain as an assistance tool for the helicopter pilot.
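To make the classify-below-the-shape idea concrete: the paper fits a regularized surface under the point cloud, which is more sophisticated than anything shown here. The sketch below substitutes a per-cell lower envelope for that surface, purely to illustrate how points near the fitted shape are labelled ground and the rest elevated. `cell_size` and `tolerance` are assumed parameters, not values from the paper.

```python
from collections import defaultdict

def classify_ground(points, cell_size=2.0, tolerance=0.3):
    """points: iterable of (x, y, z) LiDAR returns.
    Returns one 'ground'/'obstacle' label per point."""
    # Stand-in for the fitted shape: the lowest z per horizontal grid cell.
    floor = defaultdict(lambda: float("inf"))
    for x, y, z in points:
        cell = (int(x // cell_size), int(y // cell_size))
        floor[cell] = min(floor[cell], z)

    # Points within tolerance of the envelope count as ground;
    # everything clearly above it is treated as an elevated object.
    labels = []
    for x, y, z in points:
        cell = (int(x // cell_size), int(y // cell_size))
        labels.append("ground" if z <= floor[cell] + tolerance else "obstacle")
    return labels
```

The level-of-detail optimization mentioned in the abstract would correspond here to varying `cell_size` with sensor accuracy and range, which this sketch does not attempt.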