2007 International Conference on Wavelet Analysis and Pattern Recognition
DOI: 10.1109/icwapr.2007.4421752
Combining SVD With wavelet transform in synthetic seismic signal denoising

Cited by 2 publications (4 citation statements)
References 3 publications
“…The RBF neural network prediction takes far less time than the linear adaptive network: the linear adaptive network's computation time changes greatly with the error requirement, while the RBF network's computation time remains essentially constant across different error accuracies. Under a strict error requirement, the linear adaptive neural network prediction model takes about 10.5 ms; at the same accuracy, the maximum computation time of the RBF neural network is 0.055 ms. Compared with the 0.68 s lag of the low-pass filter in articles [6,7], the RBF neural network's prediction time is negligible, satisfying the real-time control requirement.…”
Section: Discussion
confidence: 99%
“…, where  is learning rate. Widrow-Hoff learning rule could only train a single layer linear neural network; for multi-layer linear network, by using the superposition principle, we can design a single-layer linear neural network, owning the considerable performance [7] . Assuming that the center vector of RBF network hidden layer basis function and the normalizing parameters have been determined by offline.…”
Section: Prediction Algorithmsmentioning
confidence: 99%
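The Widrow-Hoff (LMS) rule mentioned in the statement above updates a single-layer linear unit's weights by a gradient step on the squared prediction error. As a minimal sketch (the learning rate, toy target, and iteration count below are illustrative assumptions, not values from the cited paper):

```python
import numpy as np

def widrow_hoff_step(w, x, d, eta=0.05):
    """One Widrow-Hoff (LMS) update for a single-layer linear unit.

    w   : weight vector
    x   : input vector
    d   : desired (target) output
    eta : learning rate (assumed value, for illustration)
    """
    y = w @ x                 # linear output of the unit
    e = d - y                 # prediction error
    return w + eta * e * x    # gradient-descent step on squared error

# Toy demonstration: learn the linear map d = 2*x1 - x2 from samples.
rng = np.random.default_rng(0)
w = np.zeros(2)
for _ in range(2000):
    x = rng.standard_normal(2)
    d = 2.0 * x[0] - 1.0 * x[1]
    w = widrow_hoff_step(w, x, d)
print(np.round(w, 2))  # weights approach [2, -1]
```

Because the rule performs stochastic gradient descent on a convex quadratic cost, the weights converge to the true linear map on noiseless data, which is why (as the quote notes) a multi-layer *linear* network offers no extra capacity: by superposition it collapses to an equivalent single layer.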