Proceedings of the 3rd ACM SIGSOFT International Workshop on Software Analytics 2017
DOI: 10.1145/3121257.3121262

Predicting rankings of software verification tools

Abstract: Software verification competitions, such as the annual SV-COMP, evaluate software verification tools with respect to their effectivity and efficiency. Typically, the outcome of a competition is a (possibly category-specific) ranking of the tools. For many applications, such as building portfolio solvers, it would be desirable to have an idea of the (relative) performance of verification tools on a given verification task beforehand, i.e., prior to actually running all tools on the task. In this paper, we present a machine…
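The abstract describes learning to predict a (relative) ranking of verification tools for a given task. A minimal sketch of that idea, assuming generic numeric task features and off-the-shelf scikit-learn regressors (the paper itself uses a graph-based kernel, as the citation statements below indicate); tool names, feature extraction, and the scoring scheme are placeholders, not the authors' implementation:

```python
# Sketch: learn one score model per tool, then rank tools by predicted score.
from sklearn.svm import SVR
import numpy as np

def train_rank_models(task_features, tool_scores):
    """task_features: (n_tasks, n_features) array of task descriptors.
    tool_scores: dict mapping tool name -> (n_tasks,) array of observed scores
    (e.g., competition points or negated runtimes). Returns one regressor per tool."""
    models = {}
    for tool, scores in tool_scores.items():
        model = SVR(kernel="rbf")  # the paper uses a graph-based kernel; RBF is a stand-in
        model.fit(task_features, scores)
        models[tool] = model
    return models

def predict_ranking(models, task_feature_vector):
    """Rank tools by predicted score for a single, unseen verification task."""
    x = np.asarray(task_feature_vector).reshape(1, -1)
    predicted = {tool: float(m.predict(x)[0]) for tool, m in models.items()}
    return sorted(predicted, key=predicted.get, reverse=True)
```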

Cited by 30 publications (20 citation statements) · References 22 publications
“…configurations. For this, we applied an extension of our rank prediction approach introduced in [7]. Basically, for a given verification task we predict an ordering of CPAchecker configurations, and then sequentially run these configurations.…”
Section: Verification Approach (mentioning)
Confidence: 99%
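The citing work turns the predicted ordering into a sequential portfolio: configurations are tried one after another until one of them produces a conclusive verdict. A minimal sketch, assuming a hypothetical run_config helper and an even split of the remaining time budget over the configurations still to try (both assumptions, not the cited implementation):

```python
# Sketch: run configurations in the predicted order under a shared time budget.
def run_portfolio(task, ranked_configs, total_budget_s, run_config):
    """ranked_configs: configurations ordered by predicted performance.
    run_config(task, config, timeout_s) is assumed to return (verdict, elapsed_s)."""
    remaining = total_budget_s
    for i, config in enumerate(ranked_configs):
        # give each remaining configuration an equal share of what is left
        slice_s = remaining / (len(ranked_configs) - i)
        verdict, elapsed = run_config(task, config, timeout_s=slice_s)
        remaining -= elapsed
        if verdict in ("TRUE", "FALSE"):  # conclusive answer: stop early
            return verdict, config
        if remaining <= 0:
            break
    return "UNKNOWN", None
```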
“…By employing SVMs, we are able to choose a kernel function (similar to Weisfeiler-Lehman kernels [12]) that is specifically designed for graph substructures. However, the function proposed in [7] needed to be computed between the input instance X (the graph of a verification task) and every training sample Y, which can be quite costly in practice. As a consequence, we have re-implemented this approach and now compute Weisfeiler-Lehman-based features of single graphs.…”
Section: Verification Approach (mentioning)
Confidence: 99%
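The re-implementation mentioned above replaces the pairwise graph kernel with explicit Weisfeiler-Lehman-based features computed per graph, so a new task no longer has to be compared against every training sample. A minimal sketch of such a feature map, assuming the verification task is already encoded as a labeled graph (the adjacency lists, node labels, and number of iterations are placeholders, not the cited code):

```python
# Sketch: Weisfeiler-Lehman label refinement as an explicit feature map.
from collections import Counter

def wl_features(adjacency, labels, iterations=3):
    """adjacency: dict node -> list of neighbor nodes.
    labels: dict node -> initial (string) label, e.g., a CFG node type.
    Returns a Counter of all compressed labels seen across the refinement rounds."""
    features = Counter(labels.values())
    current = dict(labels)
    for _ in range(iterations):
        new_labels = {}
        for node, neighbors in adjacency.items():
            # relabel each node by its own label plus the sorted multiset of
            # neighbor labels (the Weisfeiler-Lehman refinement step)
            neighborhood = sorted(current[n] for n in neighbors)
            new_labels[node] = current[node] + "|" + ",".join(neighborhood)
        current = new_labels
        features.update(current.values())
    return features
```

Two graphs can then be compared cheaply, e.g., by a dot product over their shared feature counts, instead of evaluating a kernel against every training graph.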
“…The advantage of using a diverse set of models is that we can identify the most suitable application areas. Furthermore, we compare lower-level parameters of CEGAR, as opposed to most experiments in the literature [11,19,36,37], where different algorithms or tools are compared. We formulate and address a research question related to the effectiveness and efficiency of each of our contributions.…”
Section: Experimental Evaluation (mentioning)
Confidence: 99%
“…Experimental evaluation There are many works in the literature that focus on experimental evaluation and comparison of model checking algorithms [11,19,36,37]. However, they usually focus on a certain domain (e.g., SV-COMP).…”
Section: Multiple Refinements for a Counterexample (mentioning)
Confidence: 99%