Networked operation of unmanned air vehicles (UAVs) demands fusion of information from disparate sources for accurate flight control. In this investigation, a novel sensor fusion architecture for detecting runways and horizons as well as enhancing awareness of the surrounding terrain is introduced, based on fusion of enhanced vision system (EVS) and synthetic vision system (SVS) images. EVS and SVS image fusion has yet to be implemented in real-world situations due to signal misalignment. We address this through a registration step to align EVS and SVS images. Four fusion rules combining discrete wavelet transform (DWT) sub-bands are formulated, implemented, and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and pairs containing simulated turbulence. Evaluations reveal that runways and horizons can be detected accurately even in poor visibility. Furthermore, it is demonstrated that different aspects of the EVS and SVS images can be emphasized by using different DWT fusion rules. The procedure is autonomous throughout landing, irrespective of weather. The fusion architecture developed in this study holds promise for incorporation into manned head-up displays (HUDs) and UAV remote displays to assist pilots landing aircraft in poor lighting and varying weather. The algorithm also provides a basis for rule selection in other signal fusion applications.
Many aviation accidents are reported to occur during the final approach and landing stages [1]. While instrument landing systems have been implemented successfully to provide precise landing guidance, they are not available at all airports. Furthermore, smaller aircraft and fixed-wing unmanned air vehicles (UAVs) often land in remote locations with only small runway strips available. Thus, there is a clear need to assist pilots and remote operators with visual landing aids that detect runways accurately in varying weather conditions. Readily available imaging systems offer obvious potential to address this issue, but a single mode of image capture often does not fully convey all vital landing information in time-critical situations. Fusion of ground sensor arrays (e.g., infrared cameras [2]) has been proposed to provide real-time input to UAVs, in particular in the absence of GPS information [3]. Such systems can be enhanced through the fusion of image information from disparate heterogeneous sensors in real time.

The combination of outputs acquired from multiple sensors capturing complementary information has received significant attention in recent years. Techniques for fusing such information from images have been applied to a diverse range of fields, including medical imaging [4,5], remote sensing [6,7], intelligent transport [8,9], surveillance [10], low-altitude remote sensing [11], and color visibility enhancement [12]. Very recent approaches [13] have introduced weighted fusion strategies aimed at improving detection robustness with respect to object size. Su...
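The registration step mentioned in the abstract is not detailed in this excerpt. As a rough illustration only, the sketch below aligns an EVS frame to its SVS counterpart using OpenCV's ECC maximization with an affine motion model; the function name, the affine model, and the use of ECC are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only: one plausible way to register an EVS image to an
# SVS image before fusion, using OpenCV's ECC alignment (an assumption; the
# paper's actual registration method is not described in this excerpt).
import cv2
import numpy as np

def register_evs_to_svs(evs: np.ndarray, svs: np.ndarray) -> np.ndarray:
    """Warp a grayscale EVS frame onto the SVS frame via an affine ECC fit."""
    svs_f = svs.astype(np.float32)
    evs_f = evs.astype(np.float32)
    warp = np.eye(2, 3, dtype=np.float32)  # start from the identity transform
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    # Estimate the affine warp that maximizes the enhanced correlation
    # coefficient between the fixed SVS template and the moving EVS input.
    _, warp = cv2.findTransformECC(svs_f, evs_f, warp, cv2.MOTION_AFFINE, criteria)
    # Apply the inverse map so the EVS pixels land in SVS coordinates.
    return cv2.warpAffine(evs, warp, (svs.shape[1], svs.shape[0]),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```

Feature-based alignment (matched keypoints plus a homography) would be an equally plausible substitute; intensity-based ECC is shown only because it needs no distinctive texture to match, which may suit low-visibility EVS imagery.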
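Likewise, the four DWT fusion rules are named but not specified in this excerpt. The following minimal sketch, assuming pre-registered, same-size grayscale inputs and the PyWavelets library, shows the generic mechanism with two common textbook choices: averaging the approximation sub-band and selecting the maximum-magnitude coefficient in each detail sub-band.

```python
# Illustrative sketch only: generic DWT sub-band fusion of two pre-registered
# grayscale images. The max-magnitude detail rule and averaged approximation
# below are common textbook choices, not the paper's four specific rules.
import numpy as np
import pywt

def fuse_dwt(evs: np.ndarray, svs: np.ndarray,
             wavelet: str = "db2", level: int = 3) -> np.ndarray:
    """Fuse registered EVS and SVS images in the wavelet domain."""
    c_evs = pywt.wavedec2(evs.astype(np.float64), wavelet, level=level)
    c_svs = pywt.wavedec2(svs.astype(np.float64), wavelet, level=level)

    # Approximation sub-band: average the two low-pass images.
    fused = [(c_evs[0] + c_svs[0]) / 2.0]

    # Detail sub-bands (horizontal, vertical, diagonal at each level): keep
    # the larger-magnitude coefficient, which tends to preserve strong edges
    # (e.g., runway outlines, the horizon) from whichever sensor sees them.
    for e_bands, s_bands in zip(c_evs[1:], c_svs[1:]):
        fused.append(tuple(np.where(np.abs(e) >= np.abs(s), e, s)
                           for e, s in zip(e_bands, s_bands)))

    return pywt.waverec2(fused, wavelet)
```

Different fusion rules amount to different per-band selection or weighting logic at the marked step, which is presumably where the paper's four rules differ and why different rules emphasize different aspects of the EVS and SVS images.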