Background: Asthma is a frequently occurring respiratory disease with an increasing incidence around the world. Airway inflammation and remodeling are important contributors to the occurrence of asthma. We conducted this study to explore the effect of the histone deacetylase 4 (HDAC4)-mediated Kruppel-like factor 5 (KLF5)/Slug/CXC chemokine ligand-12 (CXCL12) axis on the development of asthma through its regulation of airway inflammation and remodeling. Methods: An asthmatic mouse model was induced by ovalbumin (OVA) irrigation, and HDAC4, KLF5, Slug, and CXCL12 expression in lung tissues was determined by RT-qPCR and Western blot assays. OVA was also used to induce a cell model of asthma in human BEAS-2B and HBE135-E6E7 bronchial epithelial cells. Airway hyperresponsiveness (AHR) and the expression of inflammatory cytokines in model mice were examined using the methacholine challenge test and ELISA. The biological behaviors of bronchial smooth muscle cells (BSMCs) from the asthma model were measured following loss- and gain-of-function approaches. The interactions among HDAC4, KLF5, Slug, and CXCL12 were also detected by immunoprecipitation (IP), dual-luciferase reporter assay, and ChIP. Results: HDAC4 was upregulated in lung tissues of OVA-induced asthmatic mice, and inhibition of HDAC4 alleviated airway inflammation and remodeling. HDAC4 increased KLF5 transcriptional activity through deacetylation; deacetylated KLF5 bound to the promoter of Slug and transcriptionally upregulated Slug expression, which in turn increased the expression of CXCL12 to promote inflammation in bronchial epithelial cells and thereby induce the proliferation and migration of BSMCs. Conclusion: Collectively, HDAC4 deacetylates KLF5 to upregulate Slug and CXCL12, thereby promoting airway remodeling and facilitating the progression of asthma.
When a mobile robot performs inspection tasks with complex requirements indoors, the traditional backstepping method cannot guarantee the accuracy of the trajectory, leading to problems such as the instrument falling outside the frame and focus failure when the robot captures images at high zoom. To solve this problem, this paper proposes an adaptive backstepping method based on double Q-learning for mobile robot trajectory tracking control. We design an incremental, model-free double Q-learning algorithm that quickly learns to rectify the trajectory tracking controller gain online (a minimal sketch of the update rule is given below). For the controller gain rectification problem under non-uniform state-space exploration, we propose an incremental active-learning exploration algorithm that incorporates memory replay as well as experience replay mechanisms to achieve fast online learning and controller gain rectification for the agent. To verify the feasibility of the algorithm, we evaluate it on different types of trajectories in Gazebo and on a physical platform. The results show that the adaptive trajectory tracking control algorithm can rectify the mobile robot trajectory tracking controller's gain. Compared with the Backstepping-Fractional-Order PID controller and the Fuzzy-Backstepping controller, double Q-backstepping has better robustness, generalization, and real-time performance, and stronger anti-disturbance capability.
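The core mechanism described above is a tabular double Q-learning update used to adjust a backstepping controller gain online. The following is a minimal sketch of that idea, assuming a hypothetical discretization of the tracking error into states and a small set of gain increments as actions; the paper's exact state/action design, reward, and hyperparameters are not specified here.

```python
import numpy as np

# Sketch of tabular double Q-learning for online correction of a backstepping
# controller gain. State bins, gain increments, and hyperparameters are assumptions.
N_STATES = 50                                # e.g. binned tracking error (hypothetical)
ACTIONS = np.array([-0.1, 0.0, 0.1])         # hypothetical gain increments
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q_A = np.zeros((N_STATES, len(ACTIONS)))
Q_B = np.zeros((N_STATES, len(ACTIONS)))
rng = np.random.default_rng(0)

def select_action(s):
    """Epsilon-greedy selection over the sum of both Q tables."""
    if rng.random() < EPS:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(Q_A[s] + Q_B[s]))

def update(s, a, r, s_next):
    """Double Q-learning: one table selects the greedy action, the other evaluates it."""
    if rng.random() < 0.5:
        a_star = int(np.argmax(Q_A[s_next]))
        Q_A[s, a] += ALPHA * (r + GAMMA * Q_B[s_next, a_star] - Q_A[s, a])
    else:
        b_star = int(np.argmax(Q_B[s_next]))
        Q_B[s, a] += ALPHA * (r + GAMMA * Q_A[s_next, b_star] - Q_B[s, a])

# Hypothetical use inside the control loop (discretize(), step_controller() are placeholders):
# s = discretize(tracking_error); a = select_action(s)
# gain += ACTIONS[a]                   # rectify the backstepping gain online
# r, s_next = step_controller(gain)    # reward derived from the resulting tracking error
# update(s, a, r, s_next)
```

Using two tables decouples action selection from action evaluation, which is what reduces the overestimation bias of standard Q-learning when gains are tuned from noisy tracking-error rewards.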
Unmanned aerial vehicle (UAV) trajectory tracking control algorithms based on deep reinforcement learning generally train inefficiently in unknown environments, and their convergence is unstable. To address this, a Markov decision process (MDP) model for UAV trajectory tracking is established, and a state-compensated deep deterministic policy gradient (CDDPG) algorithm is proposed. An additional neural network (C-Net), whose input is a compensation state and whose output is a compensation action, is added to the network model of the deep deterministic policy gradient (DDPG) algorithm to assist exploration during training. The action output of the DDPG network is combined with the compensation output of the C-Net to form the action that interacts with the environment (see the sketch below), enabling the UAV to rapidly track dynamic targets as accurately, continuously, and smoothly as possible. In addition, random noise is added to the generated actions to allow a certain range of exploration and make the action-value estimation more accurate. The OpenAI Gym toolkit is used to verify the proposed method, and the simulation results show that: (1) the proposed method significantly improves training efficiency by adding a compensation network and effectively improves accuracy and convergence stability; (2) under the same computer configuration, the computational cost of the proposed algorithm is basically the same as that of the QAC algorithm (an actor-critic algorithm based on the behavioral value Q) and the DDPG algorithm; (3) during training, at the same tracking accuracy, the learning efficiency is about 70% higher than that of QAC and DDPG; (4) in the simulated tracking experiment, with the same training time, the tracking error of the proposed method after stabilization is about 50% lower than that of QAC and DDPG.
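The key structural idea above is composing the DDPG actor's action with the C-Net's compensation action before adding exploration noise. Below is a minimal sketch of that composition, assuming hypothetical layer sizes and a hypothetical definition of the compensation state (e.g. the tracking error relative to the target); it is not the paper's exact architecture or training loop.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Standard DDPG actor: maps the state to an action in [-1, 1]."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, state):
        return self.net(state)

class CNet(nn.Module):
    """Compensation network (C-Net): maps a compensation state to a corrective action."""
    def __init__(self, comp_state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(comp_state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, comp_state):
        return self.net(comp_state)

def composed_action(actor, cnet, state, comp_state, noise_std=0.1):
    """Combine the DDPG action with the C-Net compensation, add exploration
    noise, and clip to the valid action range."""
    with torch.no_grad():
        a = actor(state) + cnet(comp_state)
        a = a + noise_std * torch.randn_like(a)
    return a.clamp(-1.0, 1.0)

# Example call with hypothetical dimensions: 12-D state, 6-D compensation state, 4-D action.
actor, cnet = Actor(12, 4), CNet(6, 4)
action = composed_action(actor, cnet, torch.zeros(1, 12), torch.zeros(1, 6))
```

Keeping the compensation term in a separate small network lets it be trained or shaped against the tracking error directly, while the DDPG actor continues to learn the base policy, which is consistent with the reported gains in training efficiency.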