SEG Technical Program Expanded Abstracts 2019
DOI: 10.1190/segam2019-3216640.1
Does shallow geological knowledge help neural-networks to predict deep units?

Abstract: Geological interpretation of seismic images is a visual task that can be automated by training neural networks. While neural networks have been shown to be effective at various interpretation tasks, a fundamental challenge is the lack of labeled data points in the subsurface. For example, the interpolation and extrapolation of well-based lithology using seismic images relies on a small number of known labels. Besides well-known data augmentation techniques, as well as regularization of the network output, we propos…

Cited by 2 publications (1 citation statement)
References 17 publications
“…Rather than standard tensor notation, we employ matrix-vector product descriptions to stay close to the PDE-constrained optimization literature. We block-vectorize states and Lagrangian multipliers, while weight/parameter tensors are flattened into block-matrices, see [21,18,4]. To keep notation compact, we focus on the ResNet [10] with timestep h. The network state at layer j is given by y_j = y_{j−1} − h f(K_j y_{j−1}).…”
Section: Output-regularized Neural Network Training As Pde-constraine...mentioning
confidence: 99%
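The layer update quoted above is a forward-Euler step of an ODE-like ResNet. A minimal sketch of that recursion, assuming f is a pointwise activation (tanh here) and K_j is a dense weight matrix, both of which are illustrative choices not specified by the citing text:

```python
import numpy as np


def resnet_step(y_prev, K_j, h=0.1, f=np.tanh):
    """One ResNet layer as a forward-Euler step:
    y_j = y_{j-1} - h * f(K_j @ y_{j-1})."""
    return y_prev - h * f(K_j @ y_prev)


# Propagate a state through a few layers with random weights (hypothetical
# sizes; the cited work flattens weight tensors into block-matrices like K_j).
rng = np.random.default_rng(0)
y = rng.standard_normal(4)
for K_j in (rng.standard_normal((4, 4)) for _ in range(3)):
    y = resnet_step(y, K_j)
```

Because tanh(0) = 0, a zero state is a fixed point of this update, which is a quick sanity check on the sign convention y_j = y_{j-1} − h f(K_j y_{j-1}).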