2023
DOI: 10.1090/conm/784/15753
Sampling type method combined with deep learning for inverse scattering with one incident wave

Abstract: We consider the inverse problem of determining the geometry of penetrable objects from scattering data generated by one incident wave at a fixed frequency. We first study an orthogonality sampling type method which is fast, simple to implement, and robust against noise in the data. This sampling method has a new imaging functional that is applicable to data measured in near field or far field regions. A resolution analysis of the imaging functional is carried out, where the explicit decay rate of the functional …
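To make the sampling idea concrete, here is a minimal sketch of a *generic* orthogonality sampling imaging functional for 2D far-field data. This is the classical form of such functionals, not necessarily the paper's new one; the wavenumber `k`, the number of observation directions, and the point-scatterer test data are all illustrative assumptions.

```python
import numpy as np

# Assumed setup: wavenumber k and equally spaced far-field observation
# directions on the unit circle (values are illustrative, not the paper's).
k = 2 * np.pi            # wavenumber
n_dirs = 64              # number of observation directions
theta = 2 * np.pi * np.arange(n_dirs) / n_dirs
xhat = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # unit directions

def imaging_functional(u_far, z):
    """Generic orthogonality sampling functional at sampling point z:
    I(z) = | integral over directions of u_inf(xhat) * exp(i k xhat . z) |,
    approximated by the trapezoid rule on the discrete directions."""
    phase = np.exp(1j * k * (xhat @ z))
    return np.abs(np.sum(u_far * phase)) * (2 * np.pi / n_dirs)
```

Evaluating `imaging_functional` on a sampling grid produces an image that peaks near the scatterer; the method needs only one incident wave because the functional uses the measured data directly, with no matrix inversion.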

Cited by 2 publications (3 citation statements)
References 26 publications
“…We summarize the DSM-DL scheme in algorithm 1. The explicit loss function will be given in equation (35) in the numerical experiments. Specifically, if we discretize the index functions and the true contrasts on N_d × N_d grid points in Ω for 2D problems, then the input to the neural network is a tensor with the dimension N_i × N_d × N_d, while the output of the neural network is a tensor with the dimension 1 × N_d × N_d, i.e.…”
Section: DSM-DL Approach
confidence: 99%
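The quoted tensor shapes can be sketched directly; here `N_i = 4` index functions and an `N_d = 64` grid are assumed values chosen only for illustration.

```python
import numpy as np

# Assumed sizes: N_i index functions (network input channels) and an
# N_d x N_d sampling grid over the domain Omega.
N_i, N_d = 4, 64

# Network input: N_i index functions, each sampled on the N_d x N_d grid.
index_functions = np.random.rand(N_i, N_d, N_d).astype(np.float32)

# Network output: a single-channel image of the contrast on the same grid.
predicted_contrast = np.zeros((1, N_d, N_d), dtype=np.float32)

assert index_functions.shape == (N_i, N_d, N_d)
assert predicted_contrast.shape == (1, N_d, N_d)
```

The channel dimension is the only difference between input and output, so the network acts as an image-to-image map from stacked index functions to a single contrast image.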
“…The batch size is taken as 6 and we use a total of 30 epochs to train the neural networks. As we use the loss function (35), we actually have 6000 training data and the batch size is 12. The learning rate starts at 0.001 and decreases by a factor of 0.5 every 3 epochs.…”
Section: Circle Dataset Example
confidence: 99%
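The learning-rate schedule quoted above (start at 0.001, halve every 3 epochs over 30 epochs) can be written as a one-line step decay; the function name and defaults below are illustrative, not from the cited paper.

```python
def learning_rate(epoch, base_lr=0.001, decay=0.5, step=3):
    # Step decay: lr is multiplied by `decay` once every `step` epochs.
    return base_lr * decay ** (epoch // step)

# Schedule over the quoted 30 training epochs:
schedule = [learning_rate(e) for e in range(30)]
```

Epochs 0-2 use 0.001, epochs 3-5 use 0.0005, and so on; by the final epochs the rate has been halved nine times.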