Proceedings Eighth IEEE Symposium on Software Metrics 2002
DOI: 10.1109/metric.2002.1011343
What we have learned about fighting defects

Abstract: The

Cited by 186 publications (131 citation statements)
References 8 publications
“…The CeBASE project defined the eWorkshop and has used the technology to collect valuable empirical evidence on defect reduction and COTS. [9] The rise of Agile Methods provides a fruitful area for such empirical research. This paper discusses the results of the first eWorkshop on Agile Methods sponsored by the Fraunhofer Center Maryland and North Carolina State University using the CeBASE eWorkshop technology.…”
Section: An Experience Base For Software Engineering
Confidence: 99%
“…Similarly, primary detectors should be paired with low pf secondary detectors. While we hope test engineers are the most effective defect detectors, the available empirical evidence is, at best, anecdotal [8]. As shown here, much is known about the pf, pd, accuracy, effort, and precision of static code measures.…”
Section: Buy Not Build?
Confidence: 91%
“…Looking at these figures, one can observe that the most efficient procedures are unfortunately hardly applicable to HPC scientific simulation software as they typically rely either on a formal definition of the underlying model that changes too fast to be useful in HPC scientific simulation software or on beta testing that does not apply when [20]. Approaches that are difficult to apply to HPC scientific simulation software such as beta testing at large scale or formal verification have been grayed.…”
Section: Proposed Tests
Confidence: 99%