2021
DOI: 10.7717/peerj-cs.548

Using application benchmark call graphs to quantify and improve the practical relevance of microbenchmark suites

Abstract: Performance problems in applications should ideally be detected as soon as they occur, i.e., directly when the causing code modification is added to the code repository. To this end, complex and cost-intensive application benchmarks or lightweight but less relevant microbenchmarks can be added to existing build pipelines to ensure performance goals. In this paper, we show how the practical relevance of microbenchmark suites can be improved and verified based on the application flow during an application benchmark…
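The abstract's trade-off between heavyweight application benchmarks and lightweight microbenchmarks is easiest to see in code. Below is a minimal sketch of a microbenchmark using the JMH harness; the `parseRequestLine` method is a hypothetical stand-in for any small unit of application code, not taken from the paper.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class RequestLineBenchmark {

    private final String line = "GET /api/items?limit=10 HTTP/1.1";

    // Hypothetical unit under test: a tiny parser that a microbenchmark
    // can exercise in isolation, unlike a full application benchmark.
    static String parseRequestLine(String requestLine) {
        int firstSpace = requestLine.indexOf(' ');
        int secondSpace = requestLine.indexOf(' ', firstSpace + 1);
        return requestLine.substring(firstSpace + 1, secondSpace);
    }

    @Benchmark
    public String parsePath() {
        // Returning the result prevents dead-code elimination by the JIT.
        return parseRequestLine(line);
    }
}
```

A benchmark like this runs in seconds, which is what makes it cheap enough for a build pipeline; the open question the paper addresses is whether such isolated benchmarks reflect the performance behavior of the whole application.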



Cited by 6 publications (5 citation statements)
References 72 publications
“…Due to their lightweight nature, it has been proposed to use microbenchmarks in CI/CD pipelines to detect performance regressions automatically after a number of code changes, e.g., [16,17,27]. As such pipelines are typically executed on cloud VMs, similar considerations as discussed above are necessary to handle cloud performance variability.…”
Section: Application
confidence: 99%
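The automation described in this statement typically amounts to comparing each benchmark's result against a stored baseline and failing the pipeline when a threshold is exceeded. Below is a minimal sketch, assuming mean execution times per benchmark are already available; the 10% threshold and map-based result format are illustrative, and in practice repeated runs and statistical tests are needed to cope with the cloud performance variability the citing authors mention.

```java
import java.util.Map;

public class RegressionCheck {

    /** Fails the pipeline step when any benchmark slowed down beyond the threshold. */
    static boolean hasRegression(Map<String, Double> baselineNs,
                                 Map<String, Double> currentNs,
                                 double threshold) {
        for (Map.Entry<String, Double> entry : currentNs.entrySet()) {
            Double baseline = baselineNs.get(entry.getKey());
            if (baseline == null) {
                continue; // new benchmark, nothing to compare against
            }
            double slowdown = (entry.getValue() - baseline) / baseline;
            if (slowdown > threshold) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Map<String, Double> baseline = Map.of("parsePath", 85.0, "encodeBody", 410.0);
        Map<String, Double> current  = Map.of("parsePath", 86.1, "encodeBody", 512.5);
        // encodeBody slowed down by ~25%, exceeding the 10% threshold.
        System.out.println(hasRegression(baseline, current, 0.10)); // true
    }
}
```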
“…This results in processing the HTTP requests fully and calling all involved middleware steps for the requested route. All microbenchmarks create a deliberate overlap between different code paths to simulate realistic suites with redundancies [17]. Thus, the injected performance issues can be detected by multiple microbenchmarks.…”
Section: Microbenchmark Suite
confidence: 99%
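An end-to-end microbenchmark of the kind this statement describes can be sketched as follows: an in-process HTTP server stands in for the application, and the benchmark drives a complete request through routing and handler. The `/items` route and its trivial handler are hypothetical, not taken from the cited evaluation.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import com.sun.net.httpserver.HttpServer;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.TearDown;

@State(Scope.Benchmark)
public class FullRouteBenchmark {

    private HttpServer server;
    private HttpClient client;
    private HttpRequest request;

    @Setup
    public void start() throws IOException {
        // In-process server standing in for the application under test;
        // the handler represents the route's full middleware chain.
        server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/items", exchange -> {
            byte[] body = "[]".getBytes(); // routing + handler logic
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        client = HttpClient.newHttpClient();
        request = HttpRequest.newBuilder(
                URI.create("http://localhost:" + server.getAddress().getPort() + "/items"))
                .build();
    }

    @TearDown
    public void stop() {
        server.stop(0);
    }

    @Benchmark
    public int fullRequest() throws Exception {
        // Exercises the complete request path: client, socket, routing, handler.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.statusCode();
    }
}
```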
“…For example, such benchmarks exist for the JVM, such as SPECjvm (Standard Performance Evaluation Corporation (SPEC) 2008), DaCapo (Blackburn et al 2006), Da Capo con Scala (Sewe et al 2011), and Renaissance (Prokopec et al 2019). Moreover, Grambow et al (2021) recently employed application benchmark traces to improve microbenchmark suites. However, it is unclear how to map from microbenchmark changes to application benchmark changes.…”
Section: What Is An Important Performance Change?
confidence: 99%
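The mapping question raised here connects back to the cited paper's core idea, stated in its title: quantifying a suite's practical relevance via the overlap between the application benchmark's call graph and the call graphs covered by the microbenchmarks. Below is a minimal sketch of such an overlap metric, assuming per-benchmark sets of reached method identifiers have already been extracted from traces; the data format and method names are illustrative.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SuiteRelevance {

    /**
     * Fraction of methods reached by the application benchmark that are
     * also reached by at least one microbenchmark in the suite.
     */
    static double coverage(Set<String> appBenchmarkMethods,
                           List<Set<String>> microbenchmarkMethods) {
        Set<String> reachedBySuite = new HashSet<>();
        for (Set<String> methods : microbenchmarkMethods) {
            reachedBySuite.addAll(methods);
        }
        reachedBySuite.retainAll(appBenchmarkMethods);
        return (double) reachedBySuite.size() / appBenchmarkMethods.size();
    }

    public static void main(String[] args) {
        Set<String> app = Set.of("route", "auth", "parse", "store", "encode");
        List<Set<String>> suite = List.of(
                Set.of("parse", "encode"),  // one microbenchmark's call graph
                Set.of("route", "parse"));  // deliberate overlap on "parse"
        System.out.println(coverage(app, suite)); // 0.6
    }
}
```

A low score flags methods that matter in realistic application flow but are never exercised by the suite, which is exactly the gap the cited paper proposes to close by recommending additional microbenchmark targets.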