Anomaly detection is an important research direction: by taking into account real-time information from different sensors and conditional information sources, possible anomalies in devices and components can be detected. This paper addresses the challenge of anomaly detection in multivariate sensing time series. For this setting we propose RADM, a real-time anomaly detection algorithm based on Hierarchical Temporal Memory (HTM) and Bayesian Networks (BN). First, an HTM model evaluates real-time anomalies in each univariate sensing time series. Second, a Naive Bayes-based model detects anomalous states in the multivariate sensing time series by analyzing the validity of the univariate results. Lastly, considering the real-time monitoring of terminal-node system states in a cloud platform, the effectiveness of the methodology is demonstrated on a simulated example. Extensive simulation results show that applying RADM to multivariate sensing time series detects more anomalies and thus remarkably improves real-time anomaly detection performance.
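The two-stage design described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the HTM stage is replaced by a simple rolling z-score detector (HTM itself is a learned sequence-memory model), and the fusion stage uses a Naive Bayes posterior over per-sensor anomaly scores. All function names and the `prior` parameter are hypothetical.

```python
import math
from collections import deque

def streaming_anomaly_scores(series, window=30):
    """Per-series anomaly score via a rolling z-score (stand-in for
    the HTM stage of RADM). Returns a score in [0, 1) per point."""
    buf = deque(maxlen=window)
    scores = []
    for x in series:
        if len(buf) >= 2:
            mean = sum(buf) / len(buf)
            var = sum((v - mean) ** 2 for v in buf) / len(buf)
            std = math.sqrt(var) or 1e-9
            z = abs(x - mean) / std
            scores.append(1.0 - math.exp(-z))  # squash into [0, 1)
        else:
            scores.append(0.0)  # not enough history yet
        buf.append(x)
    return scores

def naive_bayes_fusion(per_series_scores, prior=0.01):
    """Fuse per-sensor scores into one anomaly posterior, treating each
    sensor's score as P(evidence | anomaly) under Naive Bayes."""
    p_anom, p_norm = prior, 1.0 - prior
    for s in per_series_scores:
        p_anom *= max(s, 1e-9)
        p_norm *= max(1.0 - s, 1e-9)
    return p_anom / (p_anom + p_norm)
```

Because several sensors must agree before the fused posterior rises, combining univariate detectors this way tends to flag anomalies that any single series would miss, which is the intuition behind the multivariate stage.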
With the development of communication and artificial intelligence technology, the intelligent vehicle has become an important part of the Internet of Things. At present, single-vehicle intelligence is gradually improving, and more and more unmanned vehicles appear on the road. In the future, this individual intelligence will need to be transformed into collective intelligence to fully realize the advantages of unmanned driving. Individual intelligence is self-interested: without collective cooperation, a vehicle may disrupt the whole traffic flow in pursuit of its own speed. Although vehicular ad hoc network technology guarantees communication between vehicles and makes cooperation possible, adapting coordination learning to this setting remains challenging. Coordination reinforcement learning is one of the most promising methods for multiagent coordination optimization problems; however, existing coordination learning approaches usually rely on static topologies and cannot be easily adapted to vehicle coordination in dynamic environments. We propose a dynamic coordination reinforcement learning method to help vehicles make driving decisions. First, we apply driving safety field theory to construct a dynamic coordination graph (DCG) that represents the dynamic coordination behaviors among vehicles. Second, we design reinforcement learning techniques on the DCG model to implement joint optimal action reasoning for the multivehicle system and derive the optimal driving policy for each vehicle. Finally, compared with other multiagent learning methods, our method improves safety and speed by about 1% while also improving training speed by about 8%.
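The joint action reasoning step on a coordination graph can be sketched as below. This is a simplified stand-in, not the paper's method: the pairwise payoffs (which the paper learns via reinforcement learning from the DCG) are given here as a hypothetical lookup table, and the maximization is brute force over the joint action space, which is only feasible for small graphs (real systems use message passing such as max-plus).

```python
import itertools

def best_joint_action(n_agents, actions, edges, payoff):
    """Exhaustive joint-action selection on a coordination graph.

    edges:  list of (i, j) agent pairs that coordinate
    payoff: dict mapping (i, j, a_i, a_j) -> pairwise utility
    Returns the joint action maximizing the summed edge payoffs.
    """
    best, best_val = None, float("-inf")
    for joint in itertools.product(actions, repeat=n_agents):
        val = sum(payoff[(i, j, joint[i], joint[j])] for (i, j) in edges)
        if val > best_val:
            best, best_val = joint, val
    return best, best_val
```

The key property is that the global utility decomposes over graph edges, so only directly coordinating vehicle pairs contribute terms; a dynamic graph simply changes which edges appear at each step.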
With the growth of Internet of Things technology, IoT applications have become widespread in the field of intelligent vehicles, and artificial intelligence algorithms, especially deep reinforcement learning (RL) methods, are increasingly used in autonomous driving. Early work applied a large number of deep RL techniques to the behavior planning module of single-vehicle autonomous driving. However, autonomous driving is an environment in which multiple intelligent vehicles coexist, interact, and change dynamically. In this environment, multiagent RL is one of the most promising technologies for solving the coordinated behavior planning problem of multiple vehicles, yet research on this topic remains rare. This paper introduces a dynamic coordination graph (CG) convolution technique for the cooperative learning of multiple intelligent vehicles. The method dynamically constructs a CG model among vehicles, effectively reducing the influence of unrelated vehicles and simplifying the learning process. The relationships between vehicles are refined using an attention mechanism, and graph convolutional RL is used to simulate a message-passing aggregation algorithm that maximizes local utilities and obtains the maximum joint utility to guide coordination learning. Driving samples are used as training data, and a reward-shaping-guided model is combined with the model-free graph convolutional RL method, enabling the proposed approach to improve steadily and learn more efficiently. In addition, because the graph convolutional RL algorithm shares parameters between agents, it scales easily to large multiagent systems such as traffic environments.
Finally, the proposed algorithm is tested and verified on the multivehicle cooperative lane-changing problem in a simulated autonomous driving environment. Experimental results show that the proposed method yields a better value-function representation, allowing it to learn better coordinated driving policies than traditional dynamic coordination algorithms.
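The attention-refined message passing over the agent graph can be sketched as a single aggregation layer. This is a minimal illustration under stated assumptions, not the paper's model: real graph convolutional RL learns query/key/value projections and stacks layers inside a Q-network, whereas here raw feature vectors are mixed directly with dot-product attention weights.

```python
import math

def attention_aggregate(features, neighbors):
    """One attention-weighted aggregation step over an agent graph.

    features:  list of feature vectors, one per vehicle
    neighbors: list of neighbor-index lists (the CG edges per vehicle)
    Each vehicle's new feature is a softmax-weighted mix of its own
    and its neighbors' features.
    """
    out = []
    for i, h_i in enumerate(features):
        nbrs = neighbors[i] + [i]  # include a self-loop
        # dot-product attention score against each neighbor
        scores = [sum(a * b for a, b in zip(h_i, features[j])) for j in nbrs]
        m = max(scores)  # subtract max for numerical stability
        weights = [math.exp(s - m) for s in scores]
        z = sum(weights)
        weights = [w / z for w in weights]
        # weighted sum of neighbor features, coordinate by coordinate
        out.append([sum(w * features[j][k] for w, j in zip(weights, nbrs))
                    for k in range(len(h_i))])
    return out
```

Because every vehicle applies the same aggregation rule, parameters are naturally shared across agents, which is what lets this family of methods scale with the number of vehicles.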