2022 8th Iranian Conference on Signal Processing and Intelligent Systems (ICSPIS) 2022
DOI: 10.1109/icspis56952.2022.10043978
Deep Multi-scale Dilated Convolution Neural Network with Attention Mechanism: A Novel Method for Earthquake Magnitude Classification

Cited by 4 publications (2 citation statements) · References 24 publications
“…The optimal values of µ = 1 and γ = 0.05 are determined through experimental investigations. In the training phase, the network's parameters are initialized using the Xavier initializer [52], and the Adam optimizer [53] is employed for the training process. The datasets are partitioned into training, validation, and test sets, with proportions of 80%, 10%, and 10%, respectively.…”
Section: Implementation Details
Confidence: 99%
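The implementation details quoted above (Xavier initialization, the Adam optimizer, and an 80/10/10 train/validation/test split) can be sketched in plain NumPy. This is a minimal illustration, not the paper's code: the layer sizes, learning rate, and data shapes below are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(fan_in, fan_out):
    # Xavier/Glorot uniform: W ~ U(-a, a), a = sqrt(6 / (fan_in + fan_out))
    a = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-a, a, size=(fan_in, fan_out))

def train_val_test_split(x, ratios=(0.8, 0.1, 0.1)):
    # Shuffle, then partition into 80% / 10% / 10%
    idx = rng.permutation(len(x))
    n_train = int(ratios[0] * len(x))
    n_val = int(ratios[1] * len(x))
    return (x[idx[:n_train]],
            x[idx[n_train:n_train + n_val]],
            x[idx[n_train + n_val:]])

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update with bias-corrected first/second moment estimates
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

A training loop would call `adam_step` once per parameter tensor per batch, carrying `m`, `v`, and the step counter `t` forward between calls.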
“…A DL algorithm has a complex structure consisting of an input layer with its own feature-extraction layer, multiple hidden layers built from dense layers, and connected neurons, which gives it high generalization power [12]. This structure increases the model's learning capability compared to shallow neural network models [13]. For earthquake prediction, Berhich et al. [14] proposed improved RNN (recurrent neural network) approaches, namely long short-term memory (LSTM), the gated recurrent unit (GRU), and a fused LSTM-GRU model.…”
Section: Introduction
confidence: 99%
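The recurrent models named in the excerpt (LSTM, GRU) share a gating mechanism. As a rough, self-contained illustration of that idea, a single GRU cell can be written in NumPy as follows; the sizes and uniform initialization are illustrative assumptions, not the architecture of Berhich et al.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        k = np.sqrt(1.0 / hidden_size)
        shape = (input_size + hidden_size, hidden_size)
        self.Wz = rng.uniform(-k, k, shape)  # update-gate weights
        self.Wr = rng.uniform(-k, k, shape)  # reset-gate weights
        self.Wh = rng.uniform(-k, k, shape)  # candidate-state weights
        self.hidden_size = hidden_size

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(xh @ self.Wz)  # how much of the state to update
        r = sigmoid(xh @ self.Wr)  # how much old state feeds the candidate
        h_tilde = np.tanh(np.concatenate([x, r * h]) @ self.Wh)
        return (1 - z) * h + z * h_tilde
```

Running `step` over a sequence of inputs, carrying `h` forward each time, yields the hidden-state trajectory that an earthquake-magnitude classifier would feed into its output layer.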