Proceedings of the 2014 International Symposium on Software Testing and Analysis (ISSTA 2014)
DOI: 10.1145/2610384.2610393
Performance regression testing of concurrent classes

Abstract: Developers of thread-safe classes struggle with two opposing goals. The class must be correct, which requires synchronizing concurrent accesses, and the class should provide reasonable performance, which is difficult to realize in the presence of unnecessary synchronization. Validating the performance of a thread-safe class is challenging because it requires diverse workloads that use the class, because existing performance analysis techniques focus on individual bottleneck methods, and because reliably measur…

Cited by 69 publications (31 citation statements); references 72 publications.
“…In particular, performance testing employs statistical methods to detect differences in the performance of two versions of the software [9], [6], [10]. However, these techniques are solely focused on detecting performance degradations.…”
Section: Related Work
confidence: 99%
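The statistical comparison the excerpt above describes can be sketched with a simple permutation test on repeated timing measurements: if relabeling the combined samples rarely reproduces a mean gap as large as the observed slowdown, the difference is unlikely to be measurement noise. This is only an illustrative sketch, not the specific method of [9], [6], or [10]; the timing numbers are hypothetical.

```python
import random
import statistics

def perf_regression_pvalue(old_times, new_times, rounds=10_000, seed=0):
    """Permutation test: how often does a random relabeling of the
    combined timing samples yield a mean gap at least as large as the
    observed slowdown of the new version?"""
    observed = statistics.mean(new_times) - statistics.mean(old_times)
    combined = list(old_times) + list(new_times)
    n_old = len(old_times)
    rng = random.Random(seed)
    extreme = 0
    for _ in range(rounds):
        rng.shuffle(combined)
        gap = statistics.mean(combined[n_old:]) - statistics.mean(combined[:n_old])
        if gap >= observed:
            extreme += 1
    return extreme / rounds

# Hypothetical benchmark timings (seconds) for two versions of a class.
old = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97, 1.01]
new = [1.10, 1.12, 1.08, 1.11, 1.09, 1.13, 1.07, 1.10]
p = perf_regression_pvalue(old, new)
```

A small p-value suggests the new version is genuinely slower; note that, as the excerpt points out, a one-sided test like this detects only degradations, not improvements.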
“…Algorithmic denial of service [7] is a class of attacks that attempts to reduce the performance of a server by repeatedly sending data that exposes bad performance. SpeedGun [31] is a technique for automated performance regression testing of thread-safe classes. Our work shares the idea of generating input to trigger performance problems.…”
Section: Related Work
confidence: 99%
“…Our work shares the idea of generating input to trigger performance problems. EventBreak differs from [5], [7], and [31] by generating sequences of events instead of input data. Grechanik et al [13] describe a strategy for selecting test cases that may expose performance problems, assuming a large set of existing test inputs is available.…”
Section: Related Work
confidence: 99%
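The core idea the excerpts above attribute to SpeedGun, generating concurrent workloads against a thread-safe class and timing them, can be illustrated with a minimal sketch. The `CoarseLockMap` class and the workload parameters below are hypothetical stand-ins, not SpeedGun's actual generator or subject classes.

```python
import random
import threading
import time

class CoarseLockMap:
    """Example subject under test: a map whose every operation
    holds a single global lock (a likely over-synchronization)."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def put(self, key, value):
        with self._lock:
            self._data[key] = value

    def get(self, key):
        with self._lock:
            return self._data.get(key)

def random_workload(ops=2000, seed=None):
    """Generate a random sequence of method calls on the class."""
    rng = random.Random(seed)
    calls = []
    for _ in range(ops):
        if rng.random() < 0.5:
            calls.append(("put", rng.randrange(100), rng.random()))
        else:
            calls.append(("get", rng.randrange(100), None))
    return calls

def run_concurrently(impl, workloads):
    """Replay each workload on its own thread and time the whole run."""
    def worker(calls):
        for op, key, value in calls:
            if op == "put":
                impl.put(key, value)
            else:
                impl.get(key)

    threads = [threading.Thread(target=worker, args=(w,)) for w in workloads]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

workloads = [random_workload(seed=i) for i in range(4)]
elapsed = run_concurrently(CoarseLockMap(), workloads)
```

Running the same generated workloads against two versions of the class and comparing the elapsed times (e.g. with the statistical test sketched earlier in this report, or any other comparison of repeated measurements) is the essence of performance regression testing for concurrent classes.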
“…Concerning the performance of concurrent programs, some authors (for example, in [16]) analyze mostly the execution speed of the program. However, there are a number of different metrics that can be used as an objective function for evaluating the performance of the system.…”
Section: B Aspects Of Efficiency (Main Outputs Of The Framework)
confidence: 99%