1990
DOI: 10.1029/gl017i007p00977
An experiment in the use of trained neural networks for regional seismic event classification

Abstract: A neural network employing the back propagation learning paradigm has been developed as an experiment in the automatic classification of small regional earthquakes and quarry explosions. The network has been used in the analysis of 66 events recorded by the NORESS array in southern Norway. The input vector consists of three broadband discriminants including the spectral ratios of Sn/Pn and Lg/Pn waves, and the mean cepstral variance of Pn, Sn, and Lg. Two hidden layers are used, consisting of 8 and 2 units. Th…
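As a rough illustration of the architecture the abstract describes, the following is a minimal NumPy sketch of a 3-8-2 back-propagation classifier. The sigmoid activations, single output unit, learning rate, and synthetic stand-in data are assumptions for illustration only; the paper's actual training details are not given in the visible abstract.

```python
# Minimal sketch of a 3 -> 8 -> 2 -> 1 back-propagation network, assuming
# sigmoid activations and a squared-error loss; all hyperparameters and the
# synthetic data below are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Layer sizes: 3 input discriminants -> hidden layers of 8 and 2 -> 1 output
sizes = [3, 8, 2, 1]
W = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Return the activations of every layer for the input batch x."""
    acts = [x]
    for Wi, bi in zip(W, b):
        acts.append(sigmoid(acts[-1] @ Wi + bi))
    return acts

def backprop_step(x, y, lr=0.5):
    """One gradient-descent step on squared error (classic back propagation)."""
    acts = forward(x)
    # Output-layer delta for sigmoid units with squared-error loss
    delta = (acts[-1] - y) * acts[-1] * (1 - acts[-1])
    for i in reversed(range(len(W))):
        W[i] -= lr * acts[i].T @ delta / len(x)
        b[i] -= lr * delta.mean(axis=0)
        if i > 0:
            delta = (delta @ W[i].T) * acts[i] * (1 - acts[i])

# Synthetic stand-in for the 66 NORESS feature vectors
# (Sn/Pn spectral ratio, Lg/Pn spectral ratio, mean cepstral variance),
# with arbitrary 0/1 labels for earthquake vs. explosion.
X = rng.normal(size=(66, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(float).reshape(-1, 1)

for epoch in range(2000):
    backprop_step(X, y)

acc = ((forward(X)[-1] > 0.5) == y).mean()
print(f"training accuracy on synthetic data: {acc:.2f}")
```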

Cited by 22 publications (11 citation statements)
References 2 publications
“…However, there is no firm convention or theoretical limit for optimizing the number of hidden layers or the number of neurons in each hidden layer (Benaouda et al. 1999). Some empirical rules, however, have been discussed in the literature (Lippmann 1987; Pulli & Dysart 1990) that provide criteria for choosing the number of hidden layers and neurons as a function of the input and output neurons. In the present model, the input layer consists of three nodes and accepts input values from density (g/cc), neutron porosity (per cent) and gamma ray intensity (A.P.I.)…”
Section: Model Initiation and Implementation
confidence: 99%
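The snippet above alludes to empirical rules tying hidden-layer size to the input and output counts. The geometric-mean heuristic below is one common rule of thumb, shown purely as an assumption for illustration; it is not necessarily the rule given by Lippmann (1987) or Pulli & Dysart (1990).

```python
# A hedged illustration of an empirical hidden-layer sizing rule: take the
# geometric mean of the input and output counts. This specific heuristic is
# an assumption, not a rule attributed to the cited papers.
import math

def hidden_units_geometric_mean(n_inputs: int, n_outputs: int) -> int:
    """Suggest a hidden-layer width from the input/output neuron counts."""
    return max(1, round(math.sqrt(n_inputs * n_outputs)))

# The well-log model in the snippet has 3 inputs (density, neutron porosity,
# gamma ray intensity); assume 1 output for a single predicted property.
print(hidden_units_geometric_mean(3, 1))  # -> 2
```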
“…The dependency between model m and the noiseless data s is formed by using Bayes' theorem (which applies to conditional df's). To assess the combined information carried by two independent unconditional df's, two measures are often used: the entropy [Rietsch, 1977] and the conjunction [Tarantola and Valette, 1982; Tarantola, 1987]. In a geophysical context the latter is normally used.…”
Section: Forming the Solution
confidence: 99%
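For reference, the conjunction of two density functions in the form used by Tarantola and Valette can be sketched as below; the symbol μ for the homogeneous (noninformative) density and k for the normalizing constant follow Tarantola's notation, as best recalled here, and should be checked against the cited sources.

```latex
% Conjunction of two density functions \rho_1 and \rho_2 over the same space;
% \mu is the homogeneous (noninformative) density, k a normalizing constant.
(\rho_1 \wedge \rho_2)(\mathbf{x}) \;=\; k \,
  \frac{\rho_1(\mathbf{x})\,\rho_2(\mathbf{x})}{\mu(\mathbf{x})}
```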
“…However, it is comparatively difficult to explicitly determine an optimal and uniform network structure [20]. First, there is still no reliable theoretical limit or convention for optimizing the number of hidden layers or the number of neurons in each hidden layer [21], though some empirical rules have been discussed in the literature [22,23]. Moreover, the common criteria for ceasing training are fuzzy, meaning that it is impossible to provide a concrete, quantitative procedure for when to stop training.…”
Section: Instability On Neural Network-based Sensitivity Analysis
confidence: 99%
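Since the snippet notes that stopping criteria for training are fuzzy, here is one common concrete choice, patience-based early stopping on a validation loss. The function name, patience, and threshold values are assumptions for illustration, not a criterion from the cited work.

```python
# One concrete stopping rule: stop when the validation loss has not improved
# by at least min_delta for `patience` consecutive epochs. The parameter
# values are illustrative assumptions.
def should_stop(val_losses, patience=10, min_delta=1e-4):
    """Return True once the validation loss has plateaued."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    recent_best = min(val_losses[-patience:])
    return recent_best > best_before - min_delta

# Example: the loss plateaus after epoch 5, so training should stop.
losses = [1.0, 0.5, 0.3, 0.2, 0.15, 0.14] + [0.14] * 12
print(should_stop(losses))  # -> True
```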