Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2018)
DOI: 10.1145/3236024.3236039

Singularity: pattern fuzzing for worst case complexity

Abstract: We describe a new blackbox complexity testing technique for determining the worst-case asymptotic complexity of a given application. The key idea is to look for an input pattern, rather than a concrete input, that maximizes the asymptotic resource usage of the target program. Because input patterns can be described concisely as programs in a restricted language, our method transforms the complexity testing problem into one of optimal program synthesis. In particular, we express these input patterns using a new model of…
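The abstract's central idea, searching over scalable input patterns rather than concrete inputs, can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the target program (a naive quicksort), the fixed candidate pattern set, and the comparison-count cost metric below are all illustrative assumptions.

```python
import random

def quicksort_comparisons(arr):
    """Count partition work done by a naive quicksort (first-element pivot)."""
    count = 0
    def qs(a):
        nonlocal count
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        count += len(rest)  # cost: one partition step per remaining element
        return (qs([x for x in rest if x < pivot])
                + [pivot]
                + qs([x for x in rest if x >= pivot]))
    qs(list(arr))
    return count

# Each "pattern" is a tiny generator program: it scales to any input
# size n, which is what lets us talk about asymptotic behavior at all.
PATTERNS = {
    "random":   lambda n: random.sample(range(n), n),
    "sorted":   lambda n: list(range(n)),
    "reversed": lambda n: list(range(n, 0, -1)),
}

def worst_pattern(n):
    """Return the pattern whose generated input maximizes the cost metric."""
    costs = {name: quicksort_comparisons(gen(n)) for name, gen in PATTERNS.items()}
    return max(costs, key=costs.get), costs
```

Running `worst_pattern(200)` flags the sorted pattern, whose n(n-1)/2 = 19900 partition steps expose the quadratic worst case. Singularity goes further by synthesizing such generator programs via optimization rather than enumerating a hand-picked candidate set.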

Cited by 31 publications (17 citation statements). References 36 publications.
“…TortoiseFuzz [36] and Ankou [26] propose new coverage evaluation techniques for better seed scheduling. Some studies [21,29,37] have focused on detecting algorithmic complexity vulnerabilities based on new coverage metrics such as resource usage or execution path length.…”
Section: Related Work
confidence: 99%
“…Rampart [24] targets the opposite use case of algorithmic complexity attacks: it protects applications from CPU-exhaustion DoS attacks. Instead of finding concrete inputs that exhaust programs, Singularity [30] operates at a higher level of abstraction: given the program code, it tries to find input patterns that exhaust the application. The work of [25] synthesizes program code of network functions in order to find challenging network traffic configurations.…”
Section: Related Work
confidence: 99%
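The "higher level of abstraction" attributed to Singularity in the statement above, representing an input pattern itself as a small program, can be made concrete with a sketch. The `InputPattern` class and the two example patterns are hypothetical illustrations, not the paper's actual restricted pattern language.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class InputPattern:
    """An input pattern as a tiny program: a seed plus a step rule.

    Expanding the pattern yields a concrete input of any requested
    size, so one pattern describes a whole family of inputs and can
    characterize asymptotic (not just fixed-size) behavior."""
    seed: List[int]
    step: Callable[[List[int]], List[int]]

    def expand(self, n: int) -> List[int]:
        xs = list(self.seed)
        while len(xs) < n:
            xs = self.step(xs)
        return xs[:n]

# Two illustrative patterns: strictly ascending values, and a
# "sawtooth" that interleaves a low-valued and a high-valued run.
ascending = InputPattern(seed=[0], step=lambda xs: xs + [xs[-1] + 1])
sawtooth = InputPattern(seed=[0, 9],
                        step=lambda xs: xs + [xs[-2] + 1, xs[-1] + 1])
```

A fuzzer working at this abstraction level mutates the `seed` and `step` programs and scores each candidate by how fast the target's resource usage grows as `expand(n)` is driven to larger n.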
“…This paper makes the case for a data-driven approach to measuring and evaluating the performance of a network in an adaptive, self-driving manner. Indeed, existing work on automated benchmarking and fuzzing systems either (i) targets general computing systems and hence may not lend itself readily to adoption in a networked setting [19,22,24,26,30], (ii) aims at verifying logical properties in networked systems related to policy compliance of configurations and implementations while ignoring performance [3,21], or (iii) requires human assistance and software source code [14,25] to guide the performance evaluations and experiments, relying on hand-crafted, and often proprietary, benchmark tools, inputs, and system settings [5,18,20]. We argue that in the context of self-driving networks the performance evaluation tool itself must also be self-driving, taking into account the specifics of networked systems and the environments these systems are typically used in.…”
Section: Introduction
confidence: 99%
“…Previous work on algorithmic complexity attacks has already shown methods for generating challenging, often called adversarial, algorithm inputs [8,10,13,17,18]. With the help of these inputs the authors were able to improve algorithm performance and close security holes.…”
Section: Introduction
confidence: 99%