Unmanned Aerial Vehicles (UAVs) offer high autonomy and strong dynamic deployment capabilities. Meanwhile, with the rapid development of Internet of Things (IoT) technology, building an IoT based on UAVs can break away from the traditional single-link communication mode between UAVs and control terminals, making UAVs more intelligent and flexible when performing tasks. When UAVs perform IoT tasks, their position and pose must be tracked at all times; when tracking fails, relocalization is required to recover the current position and pose. Accurate UAV relocalization from visual information has therefore attracted much attention. However, complex illumination changes in the real world pose a major challenge to UAV visual relocalization, because traditional relocalization algorithms mostly rely on hand-crafted low-level geometric features that are sensitive to lighting conditions. In this paper, a UAV visual relocalization method using semantic object features is proposed for the UAV-based IoT. Specifically, the method uses YOLOv3 as the object detection framework to extract semantic information from images and uses that information to construct a topological map as a sparse description of the environment. With prior knowledge of the map, a random walk algorithm is applied to the association graphs to match semantic features and scenes. Finally, the EPnP algorithm is used to solve for the position and pose of the UAV, which is returned to the IoT platform. Simulation results show that the proposed method achieves robust real-time UAV relocalization under dynamically changing lighting conditions and provides a guarantee for UAVs to perform IoT tasks.
Airspace complexity is a key indicator that reflects the safety of airspace operations in air traffic management systems, and accurate prediction of airspace complexity is necessary for efficient air traffic control. In this article, we propose a novel spatial-temporal hybrid deep learning model for airspace complexity prediction that efficiently captures both the spatial correlations and the temporal dependencies in airspace complexity data. Specifically, we apply convolutional networks to discover short-term temporal patterns and skip long short-term memory networks to model long-term temporal patterns of the airspace complexity data. Moreover, we observe that the graph attention network in our model, which captures the spatial correlations among airspace sectors, significantly improves prediction accuracy. Extensive experiments are conducted on real data from six airspace sectors in Southwest China, and the results show that our spatial-temporal deep learning approach is superior to state-of-the-art methods.
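The spatial component can be sketched with a single graph attention head in the style of Velickovic et al.'s GAT. This is not the paper's implementation: the sector adjacency, feature dimensions, and random weights below are illustrative assumptions. Each sector node attends over its neighbors (including itself), and the attention weights are softmax-normalized per node:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: six airspace sectors with 4-dimensional features
n_nodes, in_dim, out_dim = 6, 4, 3
X = rng.normal(size=(n_nodes, in_dim))   # per-sector feature vectors
A = np.eye(n_nodes)                      # adjacency with self-loops
for i, j in [(0, 1), (1, 2), (3, 4), (4, 5)]:   # assumed sector adjacency
    A[i, j] = A[j, i] = 1

W = rng.normal(size=(in_dim, out_dim))   # shared linear transform
a = rng.normal(size=(2 * out_dim,))      # attention vector

H = X @ W
# Attention logits e_ij = LeakyReLU(a^T [h_i || h_j]) for connected pairs only
logits = np.full((n_nodes, n_nodes), -np.inf)
for i in range(n_nodes):
    for j in range(n_nodes):
        if A[i, j]:
            e = a @ np.concatenate([H[i], H[j]])
            logits[i, j] = e if e > 0 else 0.2 * e   # LeakyReLU

# Row-wise softmax over each node's neighborhood (-inf entries drop to zero)
alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
alpha /= alpha.sum(axis=1, keepdims=True)
H_out = alpha @ H                        # attention-weighted aggregation
```

Each row of `alpha` sums to 1, so `H_out[i]` is a convex combination of the transformed features of sector `i`'s neighborhood; in the full model this spatial layer is combined with the convolutional and skip-LSTM temporal components.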