Proceedings of the Second International Conference on Distributed Event-Based Systems 2008
DOI: 10.1145/1385989.1386030
A framework for performance evaluation of complex event processing systems

Cited by 25 publications (12 citation statements). References 4 publications.
“…Due to the rapid growth of event processing technology, the large variety of application domains, and the lack of standards [13], performance metrics are subject to various interpretations, often leading to incomparable product benchmarks. Among the most commonly used performance indicators are throughput and latency (response time), as well as scalability, security, correctness and other non-functional requirements.…”
Section: Pattern Performance Objectives
confidence: 99%
“…We use events per second (event/s) to measure throughput. Similarly to [2] and [13], we define system latency as the delay between the last input event causing a certain scenario detection and the detection itself, which results in the derivation of an output event. An application's latency is usually measured in milliseconds (ms).…”
Section: Pattern Performance Objectives
confidence: 99%
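
To make these definitions concrete, the following is a minimal sketch (hypothetical code, not taken from the cited papers; the "A followed by B" pattern and all names are invented for illustration). It timestamps each input event, treats a trivial matcher as a stand-in for the CEP engine, and reports latency as the delay between the last contributing input event and the derivation of the output event, plus overall throughput in event/s.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical input event carrying its arrival timestamp.
record InputEvent(int id, String type, long arrivalNanos) {}

public class LatencyThroughputProbe {
    public static void main(String[] args) {
        int n = 1_000_000;
        List<Long> latenciesNanos = new ArrayList<>();
        boolean sawA = false; // toy pattern: detect "A followed by B"

        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            String type = (i % 2 == 0) ? "A" : "B";
            InputEvent e = new InputEvent(i, type, System.nanoTime());

            // Stand-in for the CEP engine: a trivial "A then B" matcher.
            if (e.type().equals("A")) {
                sawA = true;
            } else if (sawA) {
                sawA = false;
                long detectionNanos = System.nanoTime(); // output event derived here
                // Latency: delay since the last input event (the B) causing the detection.
                latenciesNanos.add(detectionNanos - e.arrivalNanos());
            }
        }
        long elapsed = System.nanoTime() - start;

        double throughput = n / (elapsed / 1e9); // event/s
        double avgLatencyMs = latenciesNanos.stream()
                .mapToLong(Long::longValue).average().orElse(0) / 1e6;
        System.out.printf("throughput: %.0f event/s, avg latency: %.4f ms%n",
                throughput, avgLatencyMs);
    }
}
```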
“…Several test harnesses and benchmarks for different EBS have been published, e.g., [8,5,7]. However, previous work in the area of benchmarking has mostly focused on the design and development of test frameworks, not on the definition of workloads.…”
Section: Related Work
confidence: 99%
“…The input stream data were generated and submitted using the FINCoS framework [15], a set of benchmarking tools we have developed for assessing the performance of CEP engines. Both the load-generation components and the event processing engines under test ran on a single machine to eliminate network latencies and jitter.…”
Section: Tests Setup
confidence: 99%
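
The single-machine arrangement can be mimicked in-process. The sketch below is a hypothetical harness (it is not the FINCoS API; all names are invented): a load-generator thread and a consumer standing in for the engine under test run inside one JVM, connected by an in-memory queue, so the measured delays exclude network transfer and its jitter entirely.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical timed event; the "engine" below is a stand-in consumer.
record TimedEvent(long seq, long sentNanos) {}

public class InProcessHarness {
    private static final TimedEvent POISON = new TimedEvent(-1, -1);

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<TimedEvent> channel = new ArrayBlockingQueue<>(10_000);
        int total = 500_000;

        // Load generator: submits events through the in-memory channel,
        // so no network latency or jitter is involved.
        Thread generator = new Thread(() -> {
            try {
                for (long i = 0; i < total; i++) {
                    channel.put(new TimedEvent(i, System.nanoTime()));
                }
                channel.put(POISON); // signal end of stream
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Consumer standing in for the CEP engine under test.
        Thread engine = new Thread(() -> {
            long count = 0, latencySum = 0;
            try {
                for (TimedEvent e = channel.take(); e != POISON; e = channel.take()) {
                    latencySum += System.nanoTime() - e.sentNanos(); // in-process delay only
                    count++;
                }
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
            }
            System.out.printf("events: %d, mean in-process delay: %.3f ms%n",
                    count, latencySum / (double) count / 1e6);
        });

        engine.start();
        generator.start();
        generator.join();
        engine.join();
    }
}
```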