2020
DOI: 10.1007/978-3-030-48077-6_8

Analysing ProB’s Constraint Solving Backends

Abstract: We evaluate the strengths and weaknesses of different backends of the ProB constraint solver. For this, we train a random forest over a database of constraints to classify whether a backend is able to find a solution within a given amount of time or answers unknown. The forest is then analysed with regard to feature importances to determine the subsets of the B language in which the respective backends excel or fall short in performance. The results are compared to our initial assumptions about each backend's performance…
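
The classification setup described in the abstract can be illustrated with a short sketch: train a random forest on features extracted from B constraints and inspect its feature importances. This is not the authors' actual pipeline; the feature names, labels, and placeholder data below are assumptions for illustration only.

# Hypothetical sketch of the approach outlined in the abstract: train a random
# forest on constraint features and inspect feature importances. Feature names,
# labels, and data are illustrative assumptions, not the authors' real dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed format: each row describes one constraint (e.g. counts of B operators);
# each label says whether a given backend solved it within the time limit (1)
# or answered unknown / timed out (0).
feature_names = ["set_operators", "arithmetic_ops", "quantifiers", "relations"]
X = np.random.randint(0, 20, size=(500, len(feature_names)))  # placeholder data
y = (X[:, 2] < 5).astype(int)                                 # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)
print("accuracy:", forest.score(X_test, y_test))

# Feature importances indicate which constraint features most influence whether
# the backend succeeds, hinting at sublanguages where it excels or struggles.
for name, importance in zip(feature_names, forest.feature_importances_):
    print(f"{name}: {importance:.3f}")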

Cited by 4 publications (9 citation statements)
References 33 publications
“…Supervised learning models are trained on past data to recognize inlaying patterns and to make future predictions [22]. Based on these models, final decisions are made in sensitive areas of human life, such as whether a person will receive the requested credit, be classified as a repeat offender, be hired for a job, or be diagnosed with a particular disease.…”
Section: Relevance of Fairness in Automated Decision Making (mentioning)
confidence: 99%
“…Pre-processing mitigation strategies correspond to the stage of data collection in the ML pipeline and are used to ensure that the predefined protected feature does not impact the outcome negatively by modifying the feature space [20]. The biases inherent in the data itself are removed before model training to account for fair outcomes [22]. Bias mitigation at the stage of model development includes diverse approaches.…”
Section: Bias Mitigation Strategies (mentioning)
confidence: 99%
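
As a rough illustration of the pre-processing stage described in this citation statement, the sketch below reweights training samples so that the protected attribute and the label become statistically independent before model training. The column names and data are hypothetical, and reweighting is only one of several possible pre-processing strategies.

# Minimal sketch of a pre-processing mitigation step of the kind described above:
# reweight training samples so the protected attribute and the label are
# statistically independent before model training (column names are assumptions).
import pandas as pd

df = pd.DataFrame({
    "protected": [0, 0, 0, 1, 1, 1, 1, 0],   # e.g. a sensitive group indicator
    "label":     [1, 0, 1, 0, 0, 1, 0, 1],   # historical outcome
})

weights = []
for _, row in df.iterrows():
    p_group = (df["protected"] == row["protected"]).mean()
    p_label = (df["label"] == row["label"]).mean()
    p_joint = ((df["protected"] == row["protected"]) & (df["label"] == row["label"])).mean()
    # Expected-over-observed frequency: upweights under-represented (group, label) pairs.
    weights.append(p_group * p_label / p_joint)

df["weight"] = weights
print(df)
# These weights can then be passed to a classifier via its sample_weight argument.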