2022
DOI: 10.1007/s10664-021-10102-5

Uniform and scalable sampling of highly configurable systems

Abstract: Many analyses on configurable software systems are intractable when confronted with colossal and highly-constrained configuration spaces. These analyses could instead use statistical inference, where a tractable sample accurately predicts results for the entire space. To do so, the laws of statistical inference require each member of the population to be equally likely to be included in the sample, i.e., the sampling process needs to be “uniform”. SAT-samplers have been developed to generate uniform random sa…
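As a concrete illustration of the uniformity requirement stated in the abstract, the sketch below enumerates a deliberately tiny, hypothetical feature model and draws configurations uniformly at random from the valid ones. The feature names and constraints are invented for illustration only; real configuration spaces are far too large and constrained to enumerate, which is why the paper turns to SAT-based samplers instead.

import itertools
import random

# Hypothetical three-feature model; the constraints below are invented
# purely for illustration and are not taken from the paper.
FEATURES = ["cache", "compression", "debug"]

def is_valid(config):
    # Toy cross-tree constraints: "compression" requires "cache",
    # and "debug" excludes "compression".
    if config["compression"] and not config["cache"]:
        return False
    if config["debug"] and config["compression"]:
        return False
    return True

def uniform_sample(n_samples, seed=0):
    # Enumerate the whole (tiny) configuration space and pick valid
    # configurations with equal probability. Spaces with thousands of
    # features cannot be enumerated; a uniform SAT-sampler is needed there.
    space = [dict(zip(FEATURES, bits))
             for bits in itertools.product([False, True], repeat=len(FEATURES))]
    valid = [c for c in space if is_valid(c)]
    rng = random.Random(seed)
    return [rng.choice(valid) for _ in range(n_samples)]

for cfg in uniform_sample(3):
    print(cfg)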

Cited by 22 publications (15 citation statements) · References 53 publications
“…They participate in non-regression testing. However, SPL testing [15] and ML testing [55] are inherently difficult activities that we do not yet address; Many algorithms built into SPL are too resource-intensive (CPU, memory, and time) to consider sampling techniques [23]. Nonetheless, we believe that some work on SPL configurations opens up new opportunities to help build portfolios for automatic algorithm selection [29].…”
Section: Discussion
confidence: 99%
“…Then the main issue is not to determine the configuration workflow that best suits the actors according to the previous configurations [45], but to guide them in composing a solution for an unprecedentedly studied problem. Concurrently, it is not a question of generating random samples [23], whose relevance could not be precisely verified (e.g., stuffing the SPL with all the available algorithms and pre-processing components from the literature). Instead, it is more a matter of enriching our knowledge by systematically studying new validated configurations.…”
Section: Introduction
confidence: 99%
“…INFORMEDQX Runtime Performance. We have evaluated the performance of INFORMEDQX compared to QUICKXPLAIN on the basis of the Linux-2.6.33.3 configuration knowledge base taken from Diverso Lab's benchmark 1 (Heradio et al 2022). The characteristics of this knowledge base are the following: #features = 6,467; #relationships = 6,322; and #cross-tree constraints = 7,650.…”
Section: Analysis Of INFORMEDQX
confidence: 99%
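The quoted evaluation compares INFORMEDQX against QUICKXPLAIN for conflict detection on a Linux feature-model knowledge base. The cited INFORMEDQX algorithm is not reproduced here; as background only, the following is a minimal sketch of the classical QuickXplain divide-and-conquer recursion for computing one minimal conflict, with a brute-force consistency check standing in for the SAT/CSP solver a real implementation would call. All names and the toy constraints are assumptions for illustration.

from itertools import product

def consistent(constraints, variables):
    # Brute-force satisfiability over a small Boolean variable set;
    # a real implementation would call a SAT or CSP solver here.
    if not constraints:
        return True
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            return True
    return False

def quickxplain(background, constraints, variables):
    # Returns one minimal subset of `constraints` that is inconsistent
    # together with `background`, or [] if there is no conflict.
    if not constraints or consistent(background + constraints, variables):
        return []
    return _qx(background, bool(background), constraints, variables)

def _qx(background, has_delta, constraints, variables):
    if has_delta and not consistent(background, variables):
        return []
    if len(constraints) == 1:
        return list(constraints)
    k = len(constraints) // 2
    c1, c2 = constraints[:k], constraints[k:]
    d2 = _qx(background + c1, bool(c1), c2, variables)
    d1 = _qx(background + d2, bool(d2), c1, variables)
    return d1 + d2

# Toy conflict: a, not-a, b  ->  the minimal conflict is {a, not-a}.
a     = lambda s: s["a"]
not_a = lambda s: not s["a"]
b     = lambda s: s["b"]
print(len(quickxplain([], [a, not_a, b], ["a", "b"])))  # -> 2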
“…Being strictly more succinct means that some exponential space savings can be achieved by targeting the d-DNNF language instead of the OBDD one. These properties of d-DNNF circuits, the various sets of tractable queries supported by d-DNNF circuits and the existence of "efficient" compilers, explain why the d-DNNF language, developed two decades ago for some AI purposes (especially, model-based diagnosis) has been spreading over a number of domains that go beyond AI; in particular, theoretical computer science, database theory, and more recently software engineering [47,48].…”
Section: d-DNNF-based Reasoning for Feature Models
confidence: 99%
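To make the tractable-queries point concrete: on a d-DNNF circuit, model counting takes a single bottom-up pass, multiplying counts at decomposable AND nodes and summing them at deterministic OR nodes. The tuple-based node encoding below is an assumption for illustration, not the output format of any particular d-DNNF compiler, and the sketch recomputes shared sub-circuits rather than memoizing them as a real implementation would.

# Assumed node encoding (illustrative only):
#   ("lit", v) / ("lit", -v)   positive / negative literal of variable v
#   ("and", child, ...)        decomposable conjunction (children use disjoint variables)
#   ("or", child, ...)         deterministic disjunction (children share no models)
#   ("true",) / ("false",)

def count_models(node, all_vars):
    count, used = _count(node)
    # Variables never mentioned under the root are unconstrained.
    return count * 2 ** (len(all_vars) - len(used))

def _count(node):
    kind = node[0]
    if kind == "true":
        return 1, frozenset()
    if kind == "false":
        return 0, frozenset()
    if kind == "lit":
        return 1, frozenset({abs(node[1])})
    children = [_count(c) for c in node[1:]]
    used = frozenset().union(*(v for _, v in children))
    if kind == "and":
        # Decomposability: children use disjoint variables, so counts multiply.
        total = 1
        for c, _ in children:
            total *= c
        return total, used
    if kind == "or":
        # Determinism: children share no models; scale each child by its
        # unmentioned variables, then sum.
        return sum(c * 2 ** (len(used) - len(v)) for c, v in children), used
    raise ValueError("unknown node type: " + kind)

# (x1 AND x2) OR (NOT x1 AND x3) over variables {1, 2, 3} has 4 models.
circuit = ("or",
           ("and", ("lit", 1), ("lit", 2)),
           ("and", ("lit", -1), ("lit", 3)))
print(count_models(circuit, {1, 2, 3}))  # -> 4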