Background: There has been a surge of interest in the development of advanced Reinforcement Learning (RL) systems as intelligent approaches for learning optimal control policies directly from agents' interactions with the environment. Objectives: In a model-free RL method with a continuous state space, the value function of the states typically needs to be approximated. In this regard, Deep Neural Networks (DNNs) provide an attractive mechanism for approximating the value function from sample transitions. DNN-based solutions, however, are highly sensitive to parameter selection, prone to overfitting, and not very sample efficient. A Kalman-based methodology, on the other hand, could serve as an efficient alternative; such an approach, however, commonly requires a priori information about the system (such as noise statistics) to perform well. The main objective of this paper is to address this issue. Methods: As a remedy to the aforementioned problems, this paper proposes an innovative Multiple Model Kalman Temporal Difference (MM-KTD) framework, which adapts the parameters of the filter using the observed states and rewards. Moreover, an active learning method is proposed to enhance the sampling efficiency of the system. More specifically, the estimated uncertainty of the value function is exploited to form the behaviour policy, leading to more visits to less certain states and, therefore, improving the overall sample efficiency of learning. As a result, the proposed MM-KTD framework can learn the optimal policy with a significantly reduced number of samples compared to its DNN-based counterparts. Results: To evaluate the performance of the proposed MM-KTD framework, we have performed a comprehensive set of experiments on three RL benchmarks, namely Inverted Pendulum, Mountain Car, and Lunar Lander. Experimental results show the superiority of the proposed MM-KTD framework over its state-of-the-art counterparts.
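The core idea of Kalman Temporal Difference, on which the abstract builds, can be illustrated with a minimal sketch: a linear value model V(s) = theta @ phi(s) whose weights and covariance are tracked by a scalar-observation Kalman filter, with the posterior covariance also supplying the value-uncertainty signal used for active exploration. All function names, features, and noise settings below are illustrative assumptions, not the paper's actual MM-KTD algorithm (which additionally adapts the filter parameters via multiple models).

```python
import numpy as np

def ktd_update(theta, P, phi_s, phi_next, r, gamma=0.99, Q=1e-4, R=1.0):
    """One Kalman Temporal-Difference step (illustrative sketch).

    The TD relation r ~ (phi(s) - gamma*phi(s')) @ theta is treated as a
    noisy scalar observation of the weight vector theta, so a standard
    Kalman correction applies. Q and R are assumed noise levels; in the
    paper's framework such parameters are adapted rather than fixed.
    """
    H = phi_s - gamma * phi_next            # observation (feature) vector
    P = P + Q * np.eye(len(theta))          # prediction step: add process noise
    S = H @ P @ H + R                       # innovation variance (scalar)
    K = P @ H / S                           # Kalman gain
    theta = theta + K * (r - H @ theta)     # correct with the TD innovation
    P = P - np.outer(K, H @ P)              # posterior covariance
    return theta, P

def value_uncertainty(P, phi):
    """Posterior variance of V(s); high values flag under-explored states,
    which an active-learning behaviour policy would visit more often."""
    return phi @ P @ phi
```

A behaviour policy in this spirit would, among candidate actions, favour the one whose successor state has the largest `value_uncertainty`, steering samples toward poorly estimated regions.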
The paper is motivated by the importance of the Smart Cities (SC) concept for the future management of global urbanization and energy consumption. Multi-agent Reinforcement Learning (RL) is an efficient solution for utilizing the large amounts of sensory data provided by the Internet of Things (IoT) infrastructure of SCs for city-wide decision making and demand-response management. Conventional Model-Free (MF) and Model-Based (MB) RL algorithms, however, use a fixed reward model to learn the value function, which makes them difficult to apply in ever-changing SC environments. Successor Representation (SR)-based techniques are attractive alternatives that address this issue by learning the expected discounted future state occupancy, referred to as the SR, together with the immediate reward of each state. SR-based approaches, however, have mainly been developed for single-agent scenarios and have not yet been extended to multi-agent settings. The paper addresses this gap and proposes the Multi-Agent Adaptive Kalman Filtering-based Successor Representation (MAKF-SR) framework. The proposed framework adapts to changes in a multi-agent environment faster than MF methods and at a lower computational cost than MB algorithms. The proposed MAKF-SR is evaluated through a comprehensive set of experiments, illustrating superior performance compared to its counterparts.
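The SR factorization the abstract relies on can be sketched in tabular form: a matrix M of expected discounted future state occupancies and a per-state reward vector w, so that V(s) = M[s] @ w. Because a reward change only requires re-learning w while M is reused, SR methods adapt faster than MF baselines. The learning rates and update rule below are illustrative assumptions; the paper's MAKF-SR learns these quantities with adaptive Kalman filtering in a multi-agent setting rather than with this plain TD rule.

```python
import numpy as np

def sr_td_update(M, w, s, s_next, r, alpha=0.1, alpha_r=0.5, gamma=0.95):
    """One tabular Successor Representation step (illustrative sketch).

    M[s] estimates the expected discounted occupancy of every state when
    starting from s; w estimates the immediate reward of each state.
    Values factorize as V(s) = M[s] @ w.
    """
    onehot = np.zeros(M.shape[1])
    onehot[s] = 1.0
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])  # SR TD error
    w[s] += alpha_r * (r - w[s])                         # reward model
    return M, w
```

Used on a stream of transitions, the same M supports immediate re-evaluation of values after a reward change by recomputing M @ w with the freshly re-learned w.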