2007
DOI: 10.1007/s00024-007-0257-9

Using Artificial Neural Networks to Predict the Presence of Overpressured Zones in the Anadarko Basin, Oklahoma

Abstract: Many sedimentary basins throughout the world exhibit areas with abnormal pore-fluid pressures (higher or lower than normal or hydrostatic pressure). Predicting pore pressure and other parameters (depth, extension, magnitude, etc.) in such areas are challenging tasks. The compressional acoustic (sonic) log (DT) is often used as a predictor because it responds to changes in porosity or compaction produced by abnormal pore-fluid pressures. Unfortunately, the sonic log is not commonly recorded in most oil and/or …

Cited by 17 publications (5 citation statements) · References 24 publications
“…In BP algorithms, the learning rate, ε, is a small number (0.1 < ε < 1.0) (Aristodemou et al 2005) that controls the amount of error that will be negatively added to the interconnection weights for the next iteration (Cranganu 2007). If the learning rate is large, then large changes are allowed in the weight changes and no learning occurs.…”
Section: Setting the Learning Rate and Momentum
confidence: 99%
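The quoted statement describes the learning rate ε as the factor that scales how much of the error gradient is (negatively) added to the interconnection weights at each iteration, alongside a momentum term. A minimal sketch of such an update rule on a toy error surface (the function name and the illustrative values are assumptions, not from the cited works):

```python
import numpy as np

def update_weights(weights, gradient, velocity, learning_rate=0.2, momentum=0.5):
    """One gradient-descent step with momentum.

    The learning rate scales how much of the error gradient is
    negatively added to the weights; the momentum term carries over
    part of the previous update to smooth the trajectory."""
    velocity = momentum * velocity - learning_rate * gradient
    return weights + velocity, velocity

# Toy error surface E(w) = w^2, whose gradient is 2w; the update
# should drive the weight toward the minimum at w = 0.
w = np.array([1.0])
v = np.zeros_like(w)
for _ in range(50):
    w, v = update_weights(w, 2 * w, v)
```

With a learning rate in the (0.1, 1.0) range cited above, each step removes a fixed fraction of the remaining error; a much larger rate would overshoot the minimum and, as the statement notes, no learning would occur.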
“…In BP algorithms, the learning rate, ε, is a small number (0.1 < ε < 1.0) (Aristodemou et al., 2005) that controls the amount of error that will be negatively added to the interconnection weights for the next iteration (Cranganu, 2007). If the learning rate is large, then large changes are allowed in the weight changes, and no learning occurs.…”
Section: Setting the Learning Rate and Momentum
confidence: 99%
“…Spichak et al. (2002) applied an ANN to invert scalar controlled-source AMT data collected in a northern part of the Minou fault area. Cranganu (2007) used ANNs to predict the presence of over-pressured zones. Spichak (2007) used an ANN to reconstruct the macro-parameters of 3-D geoelectric structures.…”
Section: Introduction
confidence: 99%
“…Since artificial neural networks mimic the human brain's problem-solving processes, they can use knowledge gained from past experiences and apply that knowledge to new problems and situations (Cranganu, 2007). The training algorithm, embodied in the architecture of an ANN, modifies the connection weights until the desired convergence between the real outputs and the network's outputs is reached.…”
Section: Background Knowledge: Artificial Neural Network
confidence: 99%
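The statement above summarizes training as iterative adjustment of connection weights until the network's outputs converge on the desired outputs. A minimal single-weight sketch of that loop using the delta rule (the mapping, tolerance, and learning rate are illustrative assumptions, not taken from the cited work):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-weight "network" learning the mapping y = 2x: the connection
# weight is adjusted epoch by epoch until the network's outputs match
# the desired outputs to within a convergence tolerance.
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x

w = 0.0                      # initial connection weight
learning_rate = 0.1
for epoch in range(2000):
    error = w * x - y        # network output minus desired output
    if np.mean(error ** 2) < 1e-8:           # desired convergence reached
        break
    w -= learning_rate * np.mean(error * x)  # gradient step on the mean squared error
```

The loop stops as soon as the mean squared error falls below the tolerance, which is the "desired convergence between real outputs and a network's outputs" the citing author describes.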