2019
DOI: 10.1016/j.aei.2019.100986

A framework for brain learning-based control of smart structures

Abstract: In this approach, a deep neural network learns how to improve structural responses using feedback control. The effectiveness of the framework is demonstrated in a case study of a moment frame subjected to earthquake excitations. The performance of the learning method is improved by a proposed state-selector function that prevents the neural network from forgetting key states. Results show that the controller significantly improves structural responses not only to earthquake records on which it was trained …
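
The abstract describes a closed feedback loop in which a neural network maps measured structural states to control forces while a state-selector function keeps key states from being forgotten during training. The snippet below is a minimal sketch of that idea, not the paper's implementation; the single-degree-of-freedom dynamics, the small feedforward network, and the peak-drift selection rule are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): a neural-network feedback
# controller for a single-degree-of-freedom structure, plus a hypothetical
# "state-selector" buffer that retains peak-drift states so they are not
# discarded during training. All dynamics, sizes, and names are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Assumed SDOF structure: m*x'' + c*x' + k*x = -m*ag + u
m, c, k, dt = 1.0, 0.4, 50.0, 0.01

def step(state, u, ag):
    x, v = state
    a = (-c * v - k * x - m * ag + u) / m
    v_new = v + a * dt                      # semi-implicit Euler update
    return np.array([x + v_new * dt, v_new])

# Tiny feedforward controller: u = W2 @ tanh(W1 @ state)
W1 = rng.normal(scale=0.1, size=(8, 2))
W2 = rng.normal(scale=0.1, size=(1, 8))

def control(state):
    return float((W2 @ np.tanh(W1 @ state))[0])

# Hypothetical state-selector: keep the largest-drift states so the training
# buffer never "forgets" the most demanding responses.
def select_states(buffer, keep=64):
    buffer.sort(key=lambda s: abs(s[0]), reverse=True)
    return buffer[:keep]

state, buffer = np.array([0.0, 0.0]), []
ground_accel = 0.5 * np.sin(2 * np.pi * 1.5 * dt * np.arange(2000))  # toy record

for ag in ground_accel:
    u = control(state)
    state = step(state, u, ag)
    buffer.append(state.copy())
    buffer = select_states(buffer)          # retain only key (peak-drift) states

print("max retained drift:", max(abs(s[0]) for s in buffer))
```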

Cited by 12 publications (3 citation statements)
References 38 publications (28 reference statements)

Citation statements:
“…The Hierarchical Temporal Memory (HTM) and LSTM network approaches have been adopted to predict short-term arterial traffic flow [26]. A deep neural network that employs a feedback control concept has been adopted to solve an intelligent structural control problem [27]. It makes use of a state-selector function to prevent the neural network from forgetting key states and hence improve overall performance.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Recently developed algorithms, including deep deterministic policy gradient (DDPG) [7], trust region policy optimization (TRPO) [8], and proximal policy optimization (PPO) [9], have exhibited exceptional performance in robotics control tasks within the action-state domain. A modified DQN is used for structural control by optimizing a straightforward reward function [10]. Nonetheless, the control signal continues to be discrete, preventing accurate determination.…”
Section: Introduction (mentioning)
Confidence: 99%
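
The discrete-control limitation noted in the statement above can be illustrated with a DQN-style action selection step. The sketch below is an assumption of how such a controller might look, not the implementation in [10]; the force grid, the linear Q-approximation, and the reward are hypothetical.

```python
# Minimal sketch (an assumption, not the code of [10]): a DQN-style agent must
# choose from a discretized set of control forces, so the applied signal is
# quantized rather than continuous. The force grid, linear Q-approximation, and
# drift-penalty reward below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
forces = np.linspace(-100.0, 100.0, 11)            # discrete action set (kN)
W = rng.normal(scale=0.01, size=(len(forces), 2))  # linear Q(s, a) approximation

def q_values(state):
    return W @ state                               # one Q-value per discrete force

def act(state, eps=0.1):
    if rng.random() < eps:                         # epsilon-greedy exploration
        return int(rng.integers(len(forces)))
    return int(np.argmax(q_values(state)))

def reward(state):
    drift, velocity = state
    return -(drift ** 2 + 0.01 * velocity ** 2)    # straightforward quadratic penalty

state = np.array([0.02, -0.1])                     # [drift (m), velocity (m/s)]
a = act(state)
print(f"applied force {forces[a]:.1f} kN, reward {reward(state):.6f}")
```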
“…Deep learning networks have also been proposed to build control rules for systems with complex dynamics. For example, Li et al. (2010) employed a dynamic neural network to track nonlinearities in a building, and Rahmani et al. (2019) used a deep neural network for seismic control. The main advantage of deep learning algorithms is their capability to implicitly learn complex features (e.g., nonstationary dynamics) and adapt to uncertainties (e.g., system properties and external excitations).…”
Section: Introduction (mentioning)
Confidence: 99%