2017 Intelligent Systems and Computer Vision (ISCV)
DOI: 10.1109/isacv.2017.8054977
Bayesian regularized artificial neural network for fault detection and isolation in wind turbine

Cited by 5 publications (2 citation statements)
References 9 publications
“…In this work, we investigated the ability of a neural network trained using the Bayesian regularization technique to forecast PV power, since this method has not seen many applications in the field of solar energy prediction. The Bayesian technique has a variety of practical benefits, including the ability to solve the over-fitting problem which occurs in conventional neural networks [18].…”
Section: Bayesian Regularized Neural Network
confidence: 99%
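The excerpt above credits Bayesian regularization with controlling over-fitting by penalizing large weights. A minimal sketch of that idea (illustrative only: a linear model with fixed alpha/beta, not the cited paper's network or MacKay's full evidence framework, which re-estimates alpha and beta during training):

```python
import numpy as np

# Bayesian regularization minimizes F = beta*E_D + alpha*E_W, where
# E_D is the sum of squared errors and E_W the sum of squared weights.
# The alpha*E_W term discourages large weights, which is what limits
# over-fitting. (In the full method alpha and beta are re-estimated
# from the data; here they are fixed for illustration.)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=40)

def train(alpha, beta, steps=3000, lr=0.02):
    """Gradient descent on F = beta*||Xw - y||^2 + alpha*||w||^2."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2.0 * (beta * X.T @ (X @ w - y) + alpha * w) / len(y)
        w -= lr * grad
    return w

w_plain = train(alpha=0.0, beta=1.0)   # no weight penalty
w_reg = train(alpha=20.0, beta=1.0)    # Bayesian-style weight penalty

# The regularized solution has a strictly smaller weight norm.
print(np.linalg.norm(w_reg) < np.linalg.norm(w_plain))
```

The same objective extends unchanged to multi-layer networks; only the gradient computation changes.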
“…Moreover, most of the datasets are considered high-dimensional datasets, as they have a high number of attributes/dimensions. The obtained results are compared with nine other algorithms such as Bayesian regularization, 19 BFGS quasi-Newton, 20 resilient backpropagation, 21 scaled conjugate gradient, 22 Fletcher-Powell conjugate gradient, 23 Polak-Ribière conjugate gradient, 23 one-step secant, 23 gradient descent with momentum, 22 and gradient descent 24 in terms of convergence speed, MSE, R, and CPU running time. To make the comparisons fair and minimize the random effects, the experiments are repeated 10 times/runs.…”
Section: Comparative Algorithms
confidence: 99%
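The comparison above ranks training algorithms by convergence speed and MSE. As a toy illustration of how two of those optimizers can differ under an equal step budget (a synthetic least-squares task, not the cited study's datasets), plain gradient descent versus gradient descent with momentum:

```python
import numpy as np

# Toy benchmark: plain gradient descent vs. gradient descent with
# momentum on one least-squares problem, judged by training MSE after
# a fixed number of steps (mirroring the MSE/convergence-speed
# criteria in the excerpt, but on made-up data).

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
y = X @ rng.normal(size=4)          # noiseless target: MSE can reach 0

def mse(w):
    return float(np.mean((X @ w - y) ** 2))

def gd(steps=200, lr=0.01, momentum=0.0):
    """Heavy-ball gradient descent; momentum=0.0 gives plain GD."""
    w = np.zeros(X.shape[1])
    v = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        v = momentum * v - lr * grad
        w += v
    return w

mse_plain = mse(gd(momentum=0.0))
mse_mom = mse(gd(momentum=0.9))
# With the same step budget, momentum reaches a lower training MSE here.
print(mse_mom < mse_plain)
```

The other algorithms in the list (quasi-Newton, conjugate-gradient variants, resilient backpropagation) differ mainly in how the update direction and step size are chosen, so the same harness extends to them.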