In the modern helicopter design and development process, constrained full-state control of turbo-shaft engine/rotor systems has long been a research hotspot in both academia and industry. However, the literature has pointed out that the traditional design method, with its overly complex structure (the Min-Max structure with schedule-based transient controller, M-M-STC), may fail to meet the protection requirements of engine control systems under some circumstances while being too conservative under others. To address the engine limit-protection problem more efficiently, this paper designs a constrained full-state model predictive controller (MPC) that incorporates a linear parameter-varying (LPV) predictive model. In addition, a disturbance extended state observer (D-ESO), for which a sufficient convergence condition is deduced, is proposed as a compensator for the LPV model to alleviate the MPC model-mismatch problem. Finally, taking compressor surge prevention as a case study, a group of comparison simulations against the traditional M-M-STC method is run, and the results indicate the validity of the proposed method.
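The combination described above can be pictured with a minimal receding-horizon sketch: an MPC whose predictive model is a scheduled (LPV-style) linear model, with an additive disturbance estimate `d_hat` standing in for the D-ESO compensation. The scalar model, the grid-search solver, and all numerical values below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def lpv_model(p):
    """Linearly interpolate model coefficients by scheduling parameter p in [0, 1]."""
    a = (1 - p) * 0.9 + p * 0.7
    b = (1 - p) * 0.1 + p * 0.2
    return a, b

def mpc_step(x, p, d_hat, x_ref, horizon=5, x_max=1.0,
             u_grid=np.linspace(-1.0, 1.0, 41)):
    """Return the first input of the best constant-input sequence (grid search),
    discarding any sequence that violates the state (limit-protection) constraint."""
    a, b = lpv_model(p)
    best_u, best_cost = 0.0, np.inf
    for u in u_grid:
        xk, cost, feasible = x, 0.0, True
        for _ in range(horizon):
            xk = a * xk + b * u + d_hat   # d_hat compensates model mismatch
            if xk > x_max:                # hard limit, e.g. a surge boundary
                feasible = False
                break
            cost += (xk - x_ref) ** 2 + 0.01 * u ** 2
        if feasible and cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# One closed-loop step; in the full scheme the observer would update d_hat
# from the measured prediction error at every sample.
u0 = mpc_step(x=0.0, p=0.5, d_hat=0.05, x_ref=0.8)
```

Because infeasible input sequences are discarded before the cost comparison, the state limit is enforced inside the optimisation itself rather than by a separate limit-protection loop, which is the structural advantage over the Min-Max arrangement.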
Most reinforcement learning (RL)-based approaches to mapless point-goal navigation assume that ground-truth robot poses are available, which is unrealistic for real-world applications. In this work, we remove that assumption and instead deploy observation-based localisation algorithms, such as Lidar-based or visual odometry, for robot self-pose estimation. Although these algorithms have achieved promising performance and are robust in various harsh environments, they may fail to track the robot's location in scenarios where the observations perceived along the trajectory are insufficient or ambiguous. Using such localisation algorithms therefore introduces new, previously unstudied problems for mapless navigation tasks. This work proposes a new RL-based algorithm with which robots learn to navigate in a way that avoids both localisation failures and becoming trapped in local-minimum regions. This ability is learned through two techniques introduced in this work: a reward metric that punishes behaviours resulting in localisation failures, and a reconfigured state representation, consisting of the current observation and history trajectory information, that converts the problem from a partially observable Markov decision process (POMDP) into a Markov decision process (MDP) so as to avoid local minima.

The authors thank the China Scholarship Council (CSC) for financially supporting Feiqiang Lin in his PhD programme (201906020170).
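The two techniques can be sketched as follows: a reward term that punishes behaviours which break the localiser, and a state built from the current observation plus recent trajectory history. All names, thresholds, and reward magnitudes here are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def shaped_reward(reached_goal, dist_to_goal, prev_dist, loc_error,
                  loc_fail_thresh=0.5, fail_penalty=-10.0, goal_bonus=10.0):
    """Progress reward with a large penalty when pose estimation diverges."""
    if loc_error > loc_fail_thresh:   # odometry lost track: punish the behaviour
        return fail_penalty
    if reached_goal:
        return goal_bonus
    return prev_dist - dist_to_goal   # reward progress toward the goal

def build_state(observation, trajectory, history_len=10):
    """Concatenate the observation with the last `history_len` (x, y) poses,
    zero-padding early in an episode, so the policy sees where it has been."""
    hist = list(trajectory[-history_len:])
    while len(hist) < history_len:
        hist.insert(0, (0.0, 0.0))
    return np.concatenate([np.asarray(observation, dtype=float),
                           np.asarray(hist, dtype=float).ravel()])
```

Appending the visited poses makes repeated returns to the same region visible in the state, which is what allows the agent to learn to escape local-minimum regions rather than oscillate inside them.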