2023
DOI: 10.1016/j.fuel.2022.126297
Variable selection and data fusion for diesel cetane number prediction

Cited by 3 publications (1 citation statement)
References 45 publications
“…To improve the performance of the calibration model, spectral pretreatment and variable selection strategies need to be applied in the modeling process. Common spectral pretreatment methods for LIBS mainly consist of baseline correction, noise filtering, overlapping-peak resolution, and data compression, using techniques such as multivariate scattering correction (MSC), standard normal variate (SNV), Savitzky-Golay (SG) convolution derivation, wavelet transform (WT), first-order derivation (D1st), second-order derivation (D2nd), etc. Similarly, feature selection refers to selecting the optimal features from the original feature set based on an evaluation criterion, which is an essential approach to improving the performance of machine learning algorithms. Variable importance measurement (VIM), variable importance in projection (VIP), mutual information (MI), successive projections algorithm (SPA), genetic algorithm (GA), etc.…”
Section: Introduction
confidence: 99%
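Two of the pretreatments named in the citation statement, SNV and SG first-derivative filtering, can be sketched in a few lines. This is a minimal illustration assuming NumPy/SciPy; the spectra array is synthetic, and the window/polyorder values are arbitrary choices, not parameters from the cited paper.

```python
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row)."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def sg_derivative(spectra, window=11, polyorder=2, deriv=1):
    """Savitzky-Golay smoothing with first-order derivation (D1st)."""
    return savgol_filter(spectra, window_length=window,
                         polyorder=polyorder, deriv=deriv, axis=1)

# Hypothetical data: 5 noisy spectra, 200 wavelength channels each
rng = np.random.default_rng(0)
raw = np.sin(np.linspace(0, 6, 200)) + 0.1 * rng.standard_normal((5, 200))

# Chain the pretreatments before fitting a calibration model
pretreated = sg_derivative(snv(raw))
```

In practice the pretreatment choice (SNV vs. MSC, derivative order, window width) is itself tuned against calibration performance, which is why such methods are usually compared rather than fixed in advance.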