2020 IEEE Congress on Evolutionary Computation (CEC) 2020
DOI: 10.1109/cec48606.2020.9185683

Explaining Symbolic Regression Predictions

Cited by 20 publications (10 citation statements) | References 13 publications
“…Prior ML work in this area has attained similar performance, though it lacks the explainability of symbolic regression (Filho, Lacerda, & Pappa, 2020; Khalid, Tuszynski, Szlek, Jachowicz, & Mendyk, 2015). Utilizing a combination of random forest, the least absolute shrinkage and selection operator (LASSO) and logistic regression, Xu et al. (2017) obtained a sensitivity of 83.3% and a specificity of 90.5% on their validation set.…”
Section: Discussion
confidence: 99%
“…Based on the expanded dataset, we employ two symbolic regression methods, namely brute-force linear regression and genetic programming. 94,95 To further reduce difficulties, we decompose the rates for nucleation and growth as:…”
Section: Development of the Negen1 Model
confidence: 99%
“…This allows us to fit the sub-datasets separately with fewer parameters, corresponding to those in the individual terms. Brute-force linear regression was performed over a function database of ~1 million candidate expressions, while genetic programming 94,95 was applied multiple times and the best term was kept (the one with simultaneously good accuracy and simplicity). The complete form of the rate expression can be obtained by:…”
Section: Development Of the Negen1 Modelmentioning
confidence: 99%
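The brute-force strategy described in the statement above can be sketched in a few lines: enumerate candidate basis functions, fit each linear combination by least squares, and keep the combination that best trades off accuracy against simplicity. The toy data, the small candidate dictionary, and the penalty form below are illustrative assumptions, not the cited authors' actual ~1-million-entry function database or scoring rule.

```python
# Brute-force linear regression over a small candidate function database.
# All names and data here are illustrative assumptions.
import itertools
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 2.0, size=100)
y = 3.0 * np.log(x) + 0.5 * x**2          # hidden ground-truth expression

# Tiny stand-in for the paper's function database of candidate terms.
candidates = {
    "x": x, "x^2": x**2, "1/x": 1.0 / x,
    "log(x)": np.log(x), "sqrt(x)": np.sqrt(x), "exp(x)": np.exp(x),
}

best = None
for k in (1, 2):                           # number of terms = complexity
    for combo in itertools.combinations(candidates, k):
        A = np.column_stack([candidates[name] for name in combo])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rmse = np.sqrt(np.mean((A @ coef - y) ** 2))
        score = rmse + 0.01 * k            # simplicity penalty (assumed form)
        if best is None or score < best[0]:
            best = (score, combo, coef)

print("selected terms:", best[1])          # recovers x^2 and log(x)
```

With an exhaustive sweep like this, the exact generating terms are recovered because they yield near-zero residual; the genetic-programming step in the cited work serves the same role when the database is too large to enumerate.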
“…To address this issue, we can use an even simpler model to explain the complex GP model. For example, Filho et al. [250] proposed using a linear model to approximate the local predictions of a complex GP model around a given datapoint for symbolic regression. The local dataset contains a number of nearest neighbours drawn from the training set (rather than randomly sampled points).…”
Section: B. Post-hoc Local Interpretability by GP
confidence: 99%
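The local-explanation idea summarised above can be sketched as follows: fit a linear surrogate to the black-box model on a given point's nearest training-set neighbours (instead of the random perturbations used by LIME-style methods). The black-box function below is a stand-in assumption for an evolved GP model, and `explain_locally` is a hypothetical helper, not the cited authors' implementation.

```python
# Local linear surrogate via nearest neighbours from the training set.
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 2))

def black_box(X):
    # Stand-in for a complex GP / symbolic-regression model.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def explain_locally(x0, X_train, k=20):
    """Fit a linear surrogate to the black box on x0's k nearest neighbours."""
    dists = np.linalg.norm(X_train - x0, axis=1)
    neigh = X_train[np.argsort(dists)[:k]]
    A = np.column_stack([neigh, np.ones(len(neigh))])  # features + intercept
    coef, *_ = np.linalg.lstsq(A, black_box(neigh), rcond=None)
    return coef[:-1], coef[-1]             # local weights, intercept

weights, intercept = explain_locally(np.array([0.0, 1.0]), X_train)
# Near x0 = (0, 1) the local gradients are cos(0) ≈ 1 and 2*x1 ≈ 2,
# so the fitted weights should be roughly (1, 2).
```

The weights of the surrogate then act as local feature attributions for the GP model's prediction at that datapoint.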