Model predictive control (MPC) has been successful in applications involving the control of complex physical systems. This class of controllers leverages the information provided by an approximate model of the system's dynamics to simulate the effect of control actions. MPC methods also expose several hyper-parameters, whose tuning can be relatively expensive since it demands interactions with the physical system. We therefore investigate hyper-parameter tuning in the context of stochastic MPC, which presents extra challenges due to the randomness of the controller's actions. In these scenarios, performance outcomes are noisy, and the noise is not homogeneous across the domain of possible hyper-parameter settings, but varies in an input-dependent way. To address these issues, we propose a Bayesian optimisation framework that accounts for heteroscedastic noise when tuning hyper-parameters in control problems. Empirical results on benchmark continuous control tasks and a physical robot support the suitability of the proposed framework relative to baselines that do not take heteroscedasticity into account.
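For concreteness, the sketch below shows one way such a heteroscedastic-noise-aware tuning loop could look: a Gaussian process whose covariance carries a separate, empirically estimated noise variance per observation, queried with a lower-confidence-bound acquisition rule. The toy `mpc_cost` objective, the kernel length-scale, and the acquisition constant are illustrative assumptions, not details taken from the abstract above.

```python
import numpy as np

# Hypothetical toy setup: tune a single MPC hyper-parameter x in [0, 1].
# The cost signal has input-dependent (heteroscedastic) noise, which we
# handle by giving the GP a separate noise variance per observation
# instead of one shared sigma^2 (a minimal sketch, not the paper's method).

rng = np.random.default_rng(0)

def mpc_cost(x):
    """Stand-in for running the stochastic MPC controller: the mean cost
    is smooth in x, but the noise scale grows with x."""
    return np.sin(3 * x) + (0.05 + 0.4 * x) * rng.standard_normal()

def rbf(a, b, ls=0.2):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, noise_var, Xs):
    K = rbf(X, X) + np.diag(noise_var)   # per-point noise on the diagonal
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.maximum(var, 1e-12)

X = rng.uniform(0, 1, 5)
# Estimate input-dependent noise from a few repeated runs per setting.
reps = np.array([[mpc_cost(x) for _ in range(5)] for x in X])
y, noise_var = reps.mean(axis=1), reps.var(axis=1) / 5

grid = np.linspace(0, 1, 200)
for _ in range(15):
    mu, var = gp_posterior(X, y, noise_var, grid)
    x_next = grid[np.argmin(mu - 2.0 * np.sqrt(var))]  # LCB acquisition
    runs = np.array([mpc_cost(x_next) for _ in range(5)])
    X = np.append(X, x_next)
    y = np.append(y, runs.mean())
    noise_var = np.append(noise_var, runs.var() / 5)

print(f"best hyper-parameter found: x = {X[np.argmin(y)]:.3f}")
```

Averaging repeated runs to obtain a per-point noise estimate is the simplest plug-in choice; a fuller treatment would learn the noise function jointly with the GP surrogate.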
Markov chain Monte Carlo (MCMC) simulation is a family of stochastic algorithms commonly used to approximate probability distributions by generating samples. The aim of this proposal is to perform such inference at large scale: the growing computational demands of tall (many observations) or wide (many dimensions) data call for a study that combines statistical and engineering expertise to achieve hardware-accelerated MCMC inference. In this work, I attempt to advance the theory and practice of approximate MCMC methods by developing a toolbox of distributed MCMC algorithms; on that basis, I will either propose a new method for large-scale problems or establish a framework for choosing the most appropriate existing one. Papers like [1] provide a comprehensive review of the literature on methods for big-data problems. My focus is on divide-and-conquer approaches, since they can be distributed across several machines or Graphics Processing Units (GPUs). I first cover the theory behind these methods; exhaustive experiments will then let me compare and categorize them according to their limitations on wide and tall data, considering the dataset size n, the sample dimension d, and the number of samples T to produce.
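As an illustration of the divide-and-conquer family, the sketch below implements consensus Monte Carlo on a toy Gaussian-mean model: the data are split into S shards, an independent random-walk Metropolis chain targets each subposterior (with the prior flattened to its 1/S power), and the draws are combined by precision-weighted averaging. The model, prior, step size, and shard count are assumptions chosen for illustration, not taken from the proposal.

```python
import numpy as np

# Minimal consensus Monte Carlo sketch: each shard's chain could run on a
# separate machine or GPU; here they run sequentially for simplicity.

rng = np.random.default_rng(1)
n, S, T = 100_000, 10, 5_000      # tall data: n observations, S shards, T draws
data = rng.normal(2.0, 1.0, n)    # assumed model: N(theta, 1) with true theta = 2
shards = np.array_split(data, S)

def metropolis(x_shard, T, prior_var):
    """Random-walk Metropolis for the mean of N(theta, 1) data with a
    N(0, prior_var) prior; each shard gets the flattened prior (var * S)."""
    def log_post(theta):
        return -0.5 * np.sum((x_shard - theta) ** 2) - 0.5 * theta ** 2 / prior_var
    theta, lp = 0.0, log_post(0.0)
    draws = np.empty(T)
    for t in range(T):
        prop = theta + 0.03 * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws[t] = theta
    return draws[T // 5:]          # discard burn-in

sub_draws = np.stack([metropolis(s, T, prior_var=10.0 * S) for s in shards])

# Consensus step: weight each shard's draws by its inverse posterior variance.
w = 1.0 / sub_draws.var(axis=1, keepdims=True)
consensus = (w * sub_draws).sum(axis=0) / w.sum()

print(f"consensus posterior mean: {consensus.mean():.3f} (true mean 2.0)")
```

For a Gaussian model the precision-weighted combination is exact; for general posteriors it is only an approximation, which is the kind of limitation the proposed experiments could probe as n, d, and T vary.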