2018
DOI: 10.1016/j.energy.2018.09.118

Deep belief network based k-means cluster approach for short-term wind power forecasting

Cited by 256 publications (75 citation statements) · References 34 publications
“…The k-means algorithm identifies k centroids and then allocates every data point to the nearest cluster [53]. This algorithm was used in a study by Wang et al. [55], where k-means clustering was applied to find the historical samples with the greatest influence on forecasting accuracy, improving the efficiency of the proposed model. A clustering method was proposed by Deng et al. [52], which used a Weibull distribution to establish that an unclustered dataset P can be represented using Equation 9:…”
Section: Wind Clustering (mentioning)
confidence: 99%
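The statement above describes the clustering step only in outline. A minimal sketch of that idea is given below, assuming hypothetical daily wind-power feature vectors and scikit-learn's KMeans; it is not the authors' implementation, only an illustration of selecting the historical samples whose cluster centroid is closest to the forecast-day pattern.

```python
# Minimal sketch (not the authors' code): keep only the historical samples in
# the cluster whose centroid is nearest to the forecast-day feature vector.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
history = rng.random((365, 24))   # hypothetical: 365 days x 24 hourly features
today = rng.random(24)            # hypothetical forecast-day feature vector

k = 5                             # assumed number of clusters
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(history)

# Cluster whose centroid is closest to today's pattern.
nearest = np.argmin(np.linalg.norm(km.cluster_centers_ - today, axis=1))
similar_samples = history[km.labels_ == nearest]
print(similar_samples.shape)      # training subset used for the forecast model
```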
“…The silhouette method measures how similar an object is to its own cluster compared with other clusters. It ranges from -1 to 1, where a large value means the object is well matched to its own cluster and poorly matched to the other clusters (Wang et al., 2018). If the majority of objects have high values, the clustering result is considered successful.…”
Section: Silhouette Method (mentioning)
confidence: 99%
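As a hedged illustration of the silhouette criterion described in that statement (assumed workflow, not taken from the cited papers), the mean silhouette score in the range -1 to 1 can be compared across candidate cluster counts:

```python
# Minimal sketch: compare cluster counts by mean silhouette score (-1 to 1).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
X = rng.random((300, 24))         # hypothetical wind-power feature matrix

for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    # Higher mean silhouette => objects match their own cluster well and
    # neighbouring clusters poorly.
    print(k, round(silhouette_score(X, labels), 3))
```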
“…After this, the data were segregated into training and testing sets. Since there is no fixed rule for data segregation, we followed the approach of earlier researchers [63,64] and divided the data into 80% (training) and 20% (testing) subsets; 10% of the training data were then set aside for model validation, mainly to reduce model bias through a cross-validation process.…”
Section: Data Preparation, Feature Selection and Sensitivity Analysis (mentioning)
confidence: 99%
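The split described in that statement can be sketched as follows, assuming hypothetical data and scikit-learn's train_test_split; the exact proportions (80/20 with a further 10% validation hold-out) come from the statement, everything else is illustrative.

```python
# Minimal sketch: 80/20 train/test split, then 10% of the training portion
# held out as a validation set.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.random((1000, 24))        # hypothetical features
y = rng.random(1000)              # hypothetical wind-power targets

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.1, random_state=0)
print(len(X_train), len(X_val), len(X_test))   # 720 / 80 / 200
```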