2008
DOI: 10.1145/1378704.1378723

Wake up and smell the coffee: evaluation methodology for the 21st century

Abstract: Evaluation methodology underpins all innovation in experimental computer science. It requires relevant workloads, appropriate experimental design, and rigorous analysis. Unfortunately, methodology is not keeping pace with the changes in our field. The rise of managed languages such as Java, C#, and Ruby in the past decade and the imminent rise of commodity multicore architectures for the next decade pose new methodological challenges that are not yet widely understood. This paper explores the consequences of o…

Cited by 97 publications (15 citation statements)
References 15 publications
“…Here, the reality of interest is the software reality, and knowledge is generated by means of examining or running programs. The most prominent method here is computational experiments: the study of algorithms by exposing their implementations to a wide variety of automatically generated stimuli and measuring the effort expended by the implementation as a function of stimulus parameters [6,28,29,37,48,67,72]; it is relevant to programming language research mostly in the study of implementation techniques.…”
Section: Methods
confidence: 99%
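The excerpt above describes computational experiments only in general terms. A minimal sketch of that pattern, assuming a single Java implementation under study (java.util.Arrays.sort) and randomly generated integer arrays as the stimuli, might look like the following; the class name, input sizes, and seed are illustrative choices of ours, not details from the cited work.

```java
import java.util.Arrays;
import java.util.Random;

// Hypothetical sketch of a computational experiment: expose one implementation
// (here java.util.Arrays.sort) to automatically generated stimuli of increasing
// size and record the effort (wall-clock time) as a function of the parameter.
public class StimulusSweep {
    public static void main(String[] args) {
        Random rng = new Random(42);                    // fixed seed for repeatability
        for (int n = 1_000; n <= 1_000_000; n *= 10) {  // stimulus parameter: input size
            int[] stimulus = rng.ints(n).toArray();     // automatically generated input
            long start = System.nanoTime();
            Arrays.sort(stimulus);                      // implementation under study
            long elapsed = System.nanoTime() - start;
            System.out.printf("n=%d elapsed=%.3f ms%n", n, elapsed / 1e6);
        }
    }
}
```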
“…We then measure and report the subsequent iteration. This methodology greatly reduces non-determinism due to the adaptive optimizing compiler and improves underlying performance by about 5% compared to the prior replay methodology [14]. We run each benchmark 20 times (20 invocations) and in Table 4 we report the average and 95% confidence intervals using Student's t-distribution.…”
Section: Methods
confidence: 99%
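The reporting step that excerpt describes (averaging 20 invocations and giving a 95% confidence interval from Student's t-distribution) can be sketched as follows. The timings are placeholders, and the critical value t = 2.093 (19 degrees of freedom, two-sided 95%) is hard-coded; a statistics library could supply it instead.

```java
// Sketch (not the cited authors' code): mean and 95% confidence interval
// over 20 benchmark invocations using Student's t-distribution.
public class ConfidenceInterval {
    public static void main(String[] args) {
        double[] millis = new double[20];               // one timing per invocation
        for (int i = 0; i < millis.length; i++) {
            millis[i] = 100.0 + 2.0 * Math.sin(i);      // placeholder timings
        }

        double mean = 0.0;
        for (double m : millis) mean += m;
        mean /= millis.length;

        double sumSq = 0.0;
        for (double m : millis) sumSq += (m - mean) * (m - mean);
        double stdDev = Math.sqrt(sumSq / (millis.length - 1)); // sample standard deviation

        double tCritical = 2.093;                       // t for 19 dof, two-sided 95%
        double halfWidth = tCritical * stdDev / Math.sqrt(millis.length);

        System.out.printf("mean = %.2f ms, 95%% CI = +/- %.2f ms%n", mean, halfWidth);
    }
}
```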
“…which differs only in the multiplicity, is roughly equivalent to Account account; meaning that account can hold either no or one account (see Section 3.4 for the important difference). Note that using multiplicities, both account and accounts have the same type Account; they differ only in their declared multiplicities.…”
Section: Collection Accounts;
confidence: 99%
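Read in plain Java terms, the two declarations the excerpt contrasts are roughly the following. This is our illustrative rendering only: the multiplicities proposal in the cited work extends the language so that both fields would be declared with type Account and differ only in multiplicity, whereas standard Java needs a collection type for the many-valued case.

```java
import java.util.ArrayList;
import java.util.Collection;

// Illustrative only: in standard Java the two fields have different static types.
class Account { }

class Customer {
    Account account;                                    // holds no account or one account
    Collection<Account> accounts = new ArrayList<>();   // holds any number of accounts
}
```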
“…To measure on steady state, we used the multi-iteration determinism method for benchmarking from Blackburn et al. [5], which includes the following steps: …” [The excerpt is interleaved with residue of Table 3 of the citing paper, "Subject programs used in the evaluation": JUnit 4.0, 255 tests; 10 tests excluded because they could not be compiled with JUnit 4.0; 25 tests fail both with and without multiplicities because they should normally be run with JUnit 4.7 (see text); 2 tests removed because they contain an infinite loop (see text).]
Section: Steady-state Performance
confidence: 99%
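The warm-up-then-measure pattern underlying steady-state benchmarking can be sketched as below. This is a simplified illustration under our own assumptions (placeholder workload, 10 warm-up iterations), not the actual steps of the multi-iteration determinism method cited above.

```java
// Sketch of steady-state timing: run several unmeasured warm-up iterations so
// the adaptive optimizing compiler settles, then time one subsequent iteration.
public class SteadyState {
    static long workload() {                            // placeholder benchmark kernel
        long sum = 0;
        for (int i = 0; i < 5_000_000; i++) sum += i * 31L;
        return sum;
    }

    public static void main(String[] args) {
        int warmupIterations = 10;                      // assumed warm-up count
        for (int i = 0; i < warmupIterations; i++) {
            workload();                                 // unmeasured: lets the JIT reach steady state
        }
        long start = System.nanoTime();
        long result = workload();                       // measured iteration
        long elapsed = System.nanoTime() - start;
        System.out.printf("steady-state iteration: %.3f ms (result=%d)%n", elapsed / 1e6, result);
    }
}
```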