2010
DOI: 10.1145/1815933.1815944

Repeatability & workability evaluation of SIGMOD 2009

Cited by 12 publications (7 citation statements)
References 2 publications
“…Other areas in computer science introduced similar committees. For example, the SIGMOD conference uses so-called 'Repeatability' committees [Bonnet et al 2011; Manegold et al 2010] to verify the evaluations published in the conference. They have two goals: evaluate the 'repeatability' (independently reproducing the evaluations) and the 'workability' (exploring changes to the evaluation's parameters).…”
Section: Artifact Evaluation Committees
“…To put it bluntly, almost all Asian authors participate in the repeatability process, while few American authors do. Some American authors have complained that the process requires too much work for the benefit derived [2], but we believe that several observations can improve this cost/benefit calculation: 1. [more benefit] repeatable and workable experiments bring several benefits to a research group besides an objective seal of quality: a) higher quality software resulting from the discipline of building repeatable code; b) an improved ability to train newcomers to a project by having them "play with the system"; 2.…”
Section: Participation
“…The assessments of the repeatability process conducted in 2008 and 2009 pointed out several problems linked with reviewing experimental work [2,3]. There are obvious barriers to sharing the data and software needed to repeat experiments (e.g., private data sets, IP/licensing issues, specific hardware).…”
Section: Introduction
“…This has already resulted in a number of workshops, special issues, and software tools [17,3,18,22,30,1,13,6]. Academic institutions such as ETH in Switzerland, funding agencies, conferences and journals have all pushed for authors to include reproducible results in their publications [10,25,5,19,20,33,4,12].…”
“…Within computer science, the database community has been a pioneer in the adoption of reproducible experiments [20,19,5]: SIGMOD has had a repeatability committee since 2008, and starting in 2012, VLDB will also provide repeatability evaluation. As reproducible computational experiments become ubiquitous and are shared across different scientific domains, the database community is uniquely qualified to contribute to tools that play on them: these experiments can, after all, be viewed as data items that can be queried, modified, executed, and visualized [15].…”