2023
DOI: 10.1016/j.ref.2023.01.006
Computational solar energy – Ensemble learning methods for prediction of solar power generation based on meteorological parameters in Eastern India

Cited by 38 publications (11 citation statements)
References 40 publications
“…Previous studies, such as the work of Chakraborty et al. [41], have investigated the impact of certain variables, like weather conditions, on solar power predictions. The novelty of our research, however, is twofold: firstly, it comprehensively analyzes how these variables affect solar power predictions when applying ensemble methods, a technique not extensively explored before in this context.…”
Section: Results (mentioning)
confidence: 99%
“…These models were validated against field data from a 10 kWp solar PV power plant, providing valuable insights for greenfield solar projects in the eastern region of India. The voting and stacking algorithms exhibited superior performance, with an RMSE of 313.07 and an R²-score of 0.96, and an RMSE of 314.9 and an R²-score of 0.96, respectively [7]. Lastly, hybrid models combining CNN and LSTM, as well as LSTM and Gaussian Process Regression (GPR), showed promise in stable power generation forecasting.…”
Section: Introduction (mentioning)
confidence: 93%
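The voting and stacking ensembles mentioned in the excerpt above can be sketched with scikit-learn. This is a minimal illustration on synthetic data — the base learners, hyperparameters, and features here are placeholder assumptions, not the cited study's actual pipeline or meteorological dataset:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor, VotingRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for meteorological inputs (X) and PV power output (y).
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Voting: average the predictions of independently trained base regressors.
voting = VotingRegressor([
    ("lr", LinearRegression()),
    ("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
])

# Stacking: a meta-learner (Ridge) combines the base regressors' predictions.
stacking = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
        ("tree", DecisionTreeRegressor(max_depth=4, random_state=0)),
    ],
    final_estimator=Ridge(),
)

for name, model in [("voting", voting), ("stacking", stacking)]:
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    print(f"{name}: RMSE={rmse:.2f}, R2={r2_score(y_test, pred):.3f}")
```

Both ensembles report RMSE and R², the two metrics the excerpt uses to compare them.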
“…where l is a differentiable convex loss function that measures the difference between the target y_i and the prediction ŷ_i, Ω penalizes the complexity of the XGBoost model, and x_i is the input (Chakraborty, Mondal, Barua, & Bhattacharjee, 2023; Chen & Guestrin, 2016).…”
Section: Extreme Gradient Boosting (XGBoost) (mentioning)
confidence: 99%
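For reference, the regularized objective from Chen & Guestrin (2016) that this excerpt describes is conventionally written as:

```latex
\mathcal{L} = \sum_{i} l(\hat{y}_i, y_i) + \sum_{k} \Omega(f_k),
\qquad
\Omega(f) = \gamma T + \tfrac{1}{2}\lambda \lVert w \rVert^{2}
```

where each f_k is a regression tree, T is its number of leaves, w its leaf weights, and γ, λ are regularization hyperparameters.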