2022
DOI: 10.1016/j.engappai.2022.104842

Early warning of tunnel collapse based on Adam-optimised long short-term memory network and TBM operation parameters

Cited by 34 publications (1 citation statement)
References 43 publications
“…The optimal number of hidden neurons in each hidden layer is determined to be 64, with ReLU serving as the activation function. The Adam optimizer, leveraging the benefits of RMSProp and AdaGrad, updates the weight matrices and bias vectors [43]. Given that a larger batch size can improve gradient estimates but may extend training time, a batch size of 20 is chosen for this study.…”
Section: Model Construction (citation type: mentioning)
confidence: 99%
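As a rough illustration of the configuration described in this citation statement, the sketch below builds an LSTM model with 64 hidden neurons per hidden layer, ReLU activation, the Adam optimizer, and a batch size of 20, as stated. The number of stacked LSTM layers, the input window shape, the learning rate, the loss function, and the single-output regression head are all assumptions for illustration, not details confirmed by the cited paper.

```python
# Minimal sketch of the cited model configuration (assumptions noted inline).
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed input shape: sequences of 30 time steps with 10 TBM operation
# parameters each; the actual window length and feature count are not
# given in the citation statement.
TIMESTEPS, FEATURES = 30, 10

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, FEATURES)),
    # 64 hidden neurons per hidden layer with ReLU activation, as stated;
    # stacking two LSTM layers is an assumption here.
    layers.LSTM(64, activation="relu", return_sequences=True),
    layers.LSTM(64, activation="relu"),
    layers.Dense(1),  # assumed single-value regression output
])

# Adam combines ideas from RMSProp (per-parameter adaptive learning rates)
# and AdaGrad (accumulated gradient history) to update the weight matrices
# and bias vectors; the learning rate value is assumed.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="mse")

# Batch size of 20, as stated in the citation; the epoch count is assumed.
# model.fit(x_train, y_train, batch_size=20, epochs=100)
```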