Companion Proceedings of the 36th International Conference on Software Engineering 2014
DOI: 10.1145/2591062.2591164
Comparing test quality measures for assessing student-written tests

Cited by 26 publications (21 citation statements) · References 13 publications
“…The use of automated assessment tools [12,5,20,6] and mutation analysis techniques [1,19] have been examined as a way to assess the quality of student-written tests. Most of these techniques measure the 'code coverage ability' of student-written tests, but recent work has shown that such tools might produce an overestimation of student-written test quality [7].…”
Section: Related Work
confidence: 99%
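The mutation analysis techniques mentioned above score a test suite by the fraction of seeded faults ("mutants") it detects. A minimal sketch of that score, with purely illustrative toy programs and tests (none of these names come from the cited papers):

```python
# Mutation score: fraction of mutants "killed" (detected) by at least
# one test. This is a toy sketch of the idea, not any cited tool.

def mutation_score(tests, mutants):
    """Fraction of mutants detected by at least one test."""
    killed = sum(1 for m in mutants if any(t(m) for t in tests))
    return killed / len(mutants) if mutants else 0.0

# Toy example: the "program" adds two numbers; each mutant replaces
# the operator with a different one.
original = lambda a, b: a + b
mutants = [lambda a, b: a - b,   # subtraction mutant
           lambda a, b: a * b]   # multiplication mutant

# A test kills a mutant when the mutant's output differs from the
# original's on that test's inputs.
tests = [lambda m: m(2, 2) != original(2, 2),  # misses the a*b mutant
         lambda m: m(2, 3) != original(2, 3)]  # kills both mutants

print(mutation_score(tests, mutants))  # 1.0: every mutant is killed
```

The weak coverage of the first test (2 + 2 equals 2 * 2) illustrates why a suite can cover code yet still miss faults, which is the overestimation concern raised in [7].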
“…Some have suggested that the quality of a test can be assessed by its ability to detect bugs or faults in a software program rather than by looking at its coverage of execution paths [7]. One technique for measuring the quality of testing is the 'all-pairs' technique proposed by Goldwasser in 2002 [10] and further developed by Edwards et al. [9,7].…”
Section: Related Work
confidence: 99%
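The 'all-pairs' idea above runs every student's test suite against every student's implementation and scores a suite by how many incorrect implementations it rejects. A hedged sketch, assuming suites and programs are plain callables (the names and data shapes are illustrative, not from the cited papers):

```python
# All-pairs scoring sketch: suites[i] is a predicate that returns True
# when it rejects (fails) a given program; each suite is scored by the
# fraction of programs it rejects. Illustrative only.

def all_pairs_scores(suites, programs):
    """Per-suite fraction of the submitted programs it rejects."""
    return [sum(suite(p) for p in programs) / len(programs)
            for suite in suites]

# Toy setup: programs are meant to compute absolute value; one is buggy.
good = abs
buggy = lambda x: x            # wrong for negative inputs

# A suite rejects a program if any of its checks fails.
strict = lambda p: p(-3) != 3 or p(4) != 4
weak   = lambda p: p(5) != 5   # never exercises negative inputs

print(all_pairs_scores([strict, weak], [good, buggy]))  # [0.5, 0.0]
```

Here the strict suite rejects only the buggy submission while the weak suite rejects nothing, which is the fault-detection signal this line of work prefers over path coverage.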