The aim of multi-agent reinforcement learning is to enable interacting agents to collaboratively learn and adapt to the behavior of other agents. Typically, each agent receives private observations that provide only a partial view of the true state of the environment. In realistic settings, however, a harsh environment may cause one or more agents to exhibit arbitrarily faulty or malicious behavior, which can be enough to make current coordination mechanisms fail. In this paper, we study a practical scenario for multi-agent reinforcement learning systems that considers the security issues arising in the presence of agents with arbitrarily faulty or malicious behavior. Previous state-of-the-art work that coped with extremely noisy environments was designed on the assumption that the noise intensity of the environment is known in advance. When the noise intensity changes, however, the existing method must adjust the model configuration to learn in the new environment, which limits its practical applicability. To overcome these difficulties, we present an Attention-based Fault-Tolerant (FT-Attn) model, which selects not only correct but also relevant information for each agent at every time step in noisy environments. A multi-head attention mechanism enables the agents to learn effective communication policies through experience, concurrently with their action policies. Empirical results show that FT-Attn outperforms previous state-of-the-art methods in several extremely noisy environments, in both cooperative and competitive scenarios, coming much closer to the upper-bound performance. Furthermore, FT-Attn exhibits a more general fault-tolerance ability and does not rely on prior knowledge of the noise intensity of the environment.
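To make the multi-head attention idea concrete, the sketch below shows how each agent could attend over the encoded observations of all agents and aggregate the most relevant ones. This is a minimal illustration, not the paper's FT-Attn implementation: the function name, dimensions, and random projection matrices (which stand in for learned weights) are all assumptions.

```python
import numpy as np

def multi_head_attention(obs, n_heads=2, d_head=4, seed=0):
    """Minimal multi-head attention: each agent attends over all agents'
    encoded observations and aggregates them by relevance.

    obs: (n_agents, d_obs) array of per-agent observation encodings.
    Returns: (n_agents, n_heads * d_head) attended features per agent.
    """
    rng = np.random.default_rng(seed)
    n_agents, d_obs = obs.shape
    outputs = []
    for _ in range(n_heads):
        # Random projections stand in for learned weights (illustration only).
        Wq = rng.standard_normal((d_obs, d_head))
        Wk = rng.standard_normal((d_obs, d_head))
        Wv = rng.standard_normal((d_obs, d_head))
        Q, K, V = obs @ Wq, obs @ Wk, obs @ Wv
        # Scaled dot-product scores between every pair of agents.
        scores = Q @ K.T / np.sqrt(d_head)             # (n_agents, n_agents)
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
        outputs.append(weights @ V)                    # weighted sum of values
    return np.concatenate(outputs, axis=1)

# Each of 3 agents holds a 6-dimensional private observation.
features = multi_head_attention(
    np.random.default_rng(1).standard_normal((3, 6)))
print(features.shape)  # (3, 8)
```

In a trained model the attention weights would learn to downweight messages from faulty or malicious agents, which is the selection behavior the abstract attributes to FT-Attn.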
Currently, heading estimation can easily be performed with many built-in direction sensors, such as those in a smartphone. However, the obtained heading angle is only valid in situations where the equipped pedestrian's movement and orientation coincide, such as normal forward walking and turning. When the pedestrian faces a direction different from that of his movement, the heading angle still reflects the pedestrian's orientation rather than the movement direction, causing a heading estimation error. In this paper, we apply several related deep learning techniques and explore their respective abilities for heading estimation with a waist-mounted Miniature Inertial Measurement Unit (MIMU). Specifically, we adopt two kinds of methods to analyze the data collected from the MIMU, including acceleration, angular velocity, or their combination, to predict the heading angle. First, since heading estimation is essentially a time-series prediction problem, we introduce the powerful Long Short-Term Memory (LSTM) model. Second, we use a Graph Convolutional Network (GCN) model to capture the relationship between the orientation and the direction of motion at different times. In experiments, we show that the proposed LSTM model achieves a promising accuracy of 99.12% on the test set. To evaluate the method in real scenes, we design simulation experiments and mobile terminal tests based on TensorFlow Lite. The experimental results show that the movement heading can be effectively judged from the waist-mounted sensor data with high accuracy.
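To illustrate the time-series formulation the abstract describes, the sketch below runs a single randomly initialized LSTM cell over a sequence of IMU samples and maps the final hidden state to a heading angle. This is a minimal, untrained sketch under stated assumptions, not the paper's model: the input layout (3-axis acceleration plus 3-axis angular velocity), hidden size, and output mapping are all illustrative choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_heading(seq, d_hidden=8, seed=0):
    """Run one randomly initialized LSTM cell over an IMU sequence and map
    the final hidden state to a heading angle in radians.

    seq: (T, 6) array of [ax, ay, az, gx, gy, gz] samples (assumed layout).
    """
    rng = np.random.default_rng(seed)
    d_in = seq.shape[1]
    # One weight matrix per gate: input (i), forget (f), output (o), cell (g).
    W = {g: rng.standard_normal((d_hidden, d_in + d_hidden)) * 0.1
         for g in "ifog"}
    h = np.zeros(d_hidden)
    c = np.zeros(d_hidden)
    for x in seq:
        z = np.concatenate([x, h])       # current input joined with state
        i = sigmoid(W["i"] @ z)          # input gate
        f = sigmoid(W["f"] @ z)          # forget gate
        o = sigmoid(W["o"] @ z)          # output gate
        g = np.tanh(W["g"] @ z)          # candidate cell update
        c = f * c + i * g                # new cell state
        h = o * np.tanh(c)               # new hidden state
    w_out = rng.standard_normal(d_hidden) * 0.1
    return float(np.tanh(w_out @ h) * np.pi)  # heading in (-pi, pi)

# 50 timesteps of synthetic accelerometer + gyroscope readings.
imu = np.random.default_rng(2).standard_normal((50, 6))
angle = lstm_heading(imu)
print(-np.pi < angle < np.pi)  # True
```

In practice the gate weights and the output projection would be trained on labeled walking data; a TensorFlow Lite deployment, as the abstract mentions, would export such a trained recurrent model for on-device inference.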