2018
DOI: 10.1190/geo2017-0615.1

A data-driven amplitude variation with offset inversion method via learned dictionaries and sparse representation

Abstract: Amplitude variation with offset (AVO) inversion is a typical ill-posed inverse problem. To obtain a stable and unique solution, regularization techniques relying on mathematical models from prior information are commonly used in conventional AVO inversion methods (hence the name model-driven methods). Due to the difference between prior information and the actual geology, these methods often have difficulty achieving satisfactory accuracy and resolution. We have developed a novel data-driven inversion method f…

Cited by 39 publications (5 citation statements) · References 49 publications
“…Within a work area, the physical properties of the subsurface often have a certain similarity and lateral continuity, and the elastic parameters located at different locations have some shared features. Based on this, we assume that each elastic parameter in the same area shares a common sparse representation basis (called a sparse dictionary), which could be obtained from the well-logging data (regarded as the training set) by sparse representation (She et al., 2018). The learned dictionary $\mathbf{d}$ is a matrix of size $L_a \times N_a$, in which each column vector of length $L_a$ is a prototype feature of the sample data (hence the name atom), and $N_a$ represents the number of atoms of $\mathbf{d}$.…”
Section: Methods
confidence: 99%
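The dictionary-learning step described above can be sketched in a few lines. The cited work uses K-SVD; scikit-learn ships no K-SVD, so the sketch below substitutes sklearn.decomposition.DictionaryLearning as a stand-in, and the "well log" is a random placeholder rather than a real elastic-parameter curve.

```python
# Minimal sketch: learn an overcomplete dictionary from well-log patches.
# K-SVD (as in She et al., 2018) is not in scikit-learn, so
# DictionaryLearning stands in; the "well log" here is placeholder data.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

La, Na = 16, 128                       # atom length and atom count (La << Na)
log_curve = rng.standard_normal(5000)  # placeholder elastic-parameter log

# Training set: overlapping length-La patches extracted from the log.
X = np.stack([log_curve[i:i + La] for i in range(2000)])

learner = DictionaryLearning(
    n_components=Na,                  # Na atoms
    transform_algorithm="omp",        # sparse coding via orthogonal matching pursuit
    transform_n_nonzero_coefs=3,      # sparsity level per patch
    max_iter=20,
    random_state=0,
)
learner.fit(X)

# scikit-learn stores atoms as rows; transpose to the La x Na convention.
d = learner.components_.T
print(d.shape)                        # (16, 128)
```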
“…This shows that with the increase of $L_a$, the learned dictionary atoms contain more global information, and the dictionary is more conducive to storing the prior information derived from the training samples. One more point to note: to keep the dictionary redundant, $L_a$ should be much smaller than $N_a$ (She et al., 2018). At the same time, from Figure 4(b) we can see that the elapsed time increases with $L_a$, because K-SVD dictionary learning takes more time for high-dimensional samples.…”
Section: Applications
confidence: 99%
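The timing behavior noted here is easy to reproduce in outline. A hedged sketch, assuming the same placeholder data as above and a scikit-learn learner standing in for K-SVD; absolute times depend entirely on the machine, but the growth with $L_a$ mirrors the Figure 4(b) claim.

```python
# Illustrative only: learning time grows with the sample dimension La.
# MiniBatchDictionaryLearning stands in for K-SVD; data are placeholders.
import time
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
log_curve = rng.standard_normal(5000)

for La in (8, 16, 32, 64):
    Na = 4 * La                       # keep the dictionary redundant: La << Na
    X = np.stack([log_curve[i:i + La] for i in range(2000)])
    t0 = time.perf_counter()
    MiniBatchDictionaryLearning(n_components=Na, batch_size=64,
                                random_state=0).fit(X)
    print(f"La={La:3d}  Na={Na:4d}  elapsed={time.perf_counter() - t0:.2f}s")
```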
“…Sparse representation (SR) of signals has always been a research hotspot in the field of signal processing. In seismic signal processing, SR methods are commonly used for seismic data denoising, seismic data reconstruction and other problems (Beckouche & Ma, 2014; She et al., 2018; Shao et al., 2019). The set of atoms used in SR is called a dictionary.…”
Section: Sparse Representation
confidence: 99%
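To make the dictionary/atom terminology concrete, here is a minimal sparse-representation example built on scikit-learn's sparse_encode: a signal composed of two atoms of a random overcomplete dictionary (an assumed stand-in for a learned one) is corrupted with noise, then recovered from a 2-sparse OMP code.

```python
# Minimal sparse representation: OMP coding over a random overcomplete
# dictionary (a stand-in for a learned one), used here for toy denoising.
import numpy as np
from sklearn.decomposition import sparse_encode

rng = np.random.default_rng(0)
L, n_atoms = 32, 128                            # signal length, dictionary size

D = rng.standard_normal((n_atoms, L))           # atoms as rows (sklearn layout)
D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm atoms

clean = D[5] + 0.5 * D[40]                      # signal made of two atoms
noisy = clean + 0.05 * rng.standard_normal(L)

code = sparse_encode(noisy[None, :], D, algorithm="omp", n_nonzero_coefs=2)
denoised = code @ D

print("active atoms:", np.flatnonzero(code))    # expect [5, 40]
print("error:", np.linalg.norm(denoised - clean))
```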
“…Considering the advantages of small dictionary size and shift invariance [12], tensor sparse coding is the key point we want to apply in our model. We make the assumption [24] that the inner patterns of images can be at least approximately sparsely represented with a learned dictionary. For tensor dictionary representation, $\mathcal{T} = \mathcal{D} * \mathcal{C}$, where $\mathcal{T} \in \mathbb{R}^{d \times N \times n}$, $\mathcal{D} \in \mathbb{R}^{d \times m \times n}$ and $\mathcal{C} \in \mathbb{R}^{m \times N \times n}$.…”
Section: Tensor Representation in TGAN
confidence: 99%
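For readers unfamiliar with the * operator in the statement above, it denotes the tensor t-product (Kilmer and Martin's construction): an FFT along the third mode, independent matrix products on the frontal slices, then an inverse FFT. A minimal NumPy sketch, with sizes named after the text:

```python
# Minimal sketch of the tensor t-product T = D * C used in tensor sparse
# coding: FFT along the third mode, a matrix product per frontal slice,
# then an inverse FFT. Sizes d, m, N, n follow the quoted text.
import numpy as np

def t_product(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """t-product of A (p x q x n) and B (q x r x n), giving (p x r x n)."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    # multiply matching frontal slices in the Fourier domain
    Cf = np.einsum("pqk,qrk->prk", Af, Bf)
    return np.fft.ifft(Cf, axis=2).real

d, m, N, n = 8, 20, 50, 4                       # illustrative sizes
rng = np.random.default_rng(0)
D = rng.standard_normal((d, m, n))              # tensor dictionary
C = rng.standard_normal((m, N, n))              # tensor sparse codes
T = t_product(D, C)
print(T.shape)                                  # (8, 50, 4)
```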