2022
DOI: 10.1021/acsomega.2c02554
LGB-Stack: Stacked Generalization with LightGBM for Highly Accurate Predictions of Polymer Bandgap

Abstract: Recently, the Ramprasad group reported a quantitative structure–property relationship (QSPR) model for predicting the E_gap values of 4209 polymers, which yielded a test-set R² score of 0.90 and a test-set root-mean-square error (RMSE) of 0.44 at a train/test split ratio of 80/20. In this paper, we present a new QSPR model named LGB-Stack, which performs a two-level stacked generalization using the light grad…
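The evaluation protocol in the abstract (an 80/20 train/test split scored by R² and test-set RMSE) can be sketched as follows. This is a minimal illustration on synthetic data, using scikit-learn's GradientBoostingRegressor as a stand-in for LightGBM; it does not reproduce the paper's polymer dataset or the LGB-Stack model itself.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor  # stand-in for LightGBM
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic regression data standing in for the polymer descriptors/targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = 2.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

# 80/20 train/test split, as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

r2 = r2_score(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5  # RMSE = sqrt(MSE)
print(f"test R2 = {r2:.2f}, test RMSE = {rmse:.2f}")
```

The same two numbers (R² and RMSE on the held-out 20%) are what the paper reports for its model comparison.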

Cited by 8 publications (4 citation statements) · References 50 publications
“…Several examples of these algorithms are shown in Figure 3. (1) Regression algorithms, such as linear regression and support vector regression, are adept at predicting continuous property values such as the band gaps [30] and power conversion efficiency (PCE) of conjugated polymers. The flexibility and interpretability of these algorithms contribute to their widespread adoption.…”
Section: Conjugated Polymer Representation
Confidence: 99%
“…Then, all these base learners are fit on the original data, and the outputs of these models are stacked in columns to form (m, p)-dimensional new data, with m representing the number of samples and p representing the number of base learners. Finally, the new sample data are given to the second-layer model for fitting [23-25]. Here is an example of the training process for stacked models.…”
Section: Stacking Model
Confidence: 99%
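The stacking procedure quoted above can be sketched directly: p base learners produce out-of-fold predictions on the m training samples, these predictions are stacked column-wise into an (m, p) matrix, and a second-level model is fit on that matrix. This is a minimal illustration on synthetic data; GradientBoostingRegressor and RandomForestRegressor stand in for the paper's LightGBM base learners, and Ridge is an assumed choice of second-level model.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import Ridge
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

# Synthetic data: m = 300 samples, 8 features.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=300)

# p = 2 base learners (level one).
base_learners = [
    GradientBoostingRegressor(random_state=0),
    RandomForestRegressor(n_estimators=50, random_state=0),
]

# Out-of-fold predictions avoid leaking training labels into level two.
# Column-stacking gives the (m, p) matrix described in the excerpt.
level_two_X = np.column_stack(
    [cross_val_predict(m, X, y, cv=5) for m in base_learners]
)
print(level_two_X.shape)  # (m, p) = (300, 2)

# Level two: the second-layer model is fit on the stacked predictions.
meta_model = Ridge().fit(level_two_X, y)
```

Using out-of-fold (cross-validated) predictions for the level-two features, rather than in-sample predictions, is the standard way to keep the second-layer model from simply memorizing overfit base-learner outputs.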
“…LightGBM algorithms help advance cutting-edge technology by boosting our understanding of complicated data patterns and decision-making processes across numerous industries [41-44]. It was created by merging two unique data sampling and classification algorithms: EFB (Exclusive Feature Bundling) and GOSS (Gradient-based One-Side Sampling) [39].…”
Section: LightGBM Algorithms
Confidence: 99%
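The GOSS idea mentioned above can be illustrated in a few lines: keep the top-a fraction of training rows by gradient magnitude, randomly sample a b fraction of the remaining small-gradient rows, and up-weight the sampled rows by (1 − a) / b so gradient sums stay approximately unbiased. This is a conceptual sketch of the sampling step only, not LightGBM's actual implementation; the function name and default a = 0.2, b = 0.1 are illustrative choices.

```python
import numpy as np

def goss_sample(gradients, a=0.2, b=0.1, rng=None):
    """Conceptual GOSS sketch: one-sided sampling by gradient magnitude."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(gradients)
    order = np.argsort(-np.abs(gradients))   # rows sorted by |gradient|, descending
    top_k = int(a * n)                       # always keep the large-gradient rows
    rand_k = int(b * n)                      # subsample the small-gradient rows
    top_idx = order[:top_k]
    sampled = rng.choice(order[top_k:], size=rand_k, replace=False)
    idx = np.concatenate([top_idx, sampled])
    weights = np.ones(len(idx))
    weights[top_k:] = (1.0 - a) / b          # compensate for under-sampling
    return idx, weights

grads = np.random.default_rng(42).normal(size=1000)
idx, w = goss_sample(grads)
print(len(idx))  # 300 rows retained out of 1000 (200 top + 100 sampled)
```

EFB, the other ingredient named in the excerpt, is complementary: it reduces the feature dimension by bundling mutually exclusive sparse features, whereas GOSS reduces the number of rows each tree is grown on.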