2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)
DOI: 10.1109/icst.2016.15
A Controlled Experiment in Testing of Safety-Critical Embedded Software

Abstract: In engineering of safety-critical systems, regulatory standards often put requirements on both traceable requirements-based testing and structural coverage on system software units. Automated test input generation techniques can be used to generate test data to cover the structural aspects of a program. However, there is no conclusive evidence on how automated test input generation compares to manual test generation, or how test case generation based on the program structure compares to specification-based te…

Cited by 13 publications (5 citation statements) | References 28 publications
“…Juristo et al [20] collected testing experiments in 2004, but only a small number of the reported studies involved human subjects (e.g., Myers et al [27], Basili et al [5]). More recently, experiments evaluating test generator tools were performed: Fraser et al [16] designed an experiment for testing an existing unit either manually or with the help of EvoSuite; Rojas et al [37] investigated using test generators during development; Ramler et al [36] compared tests written by the participants with tests generated by the researchers using Randoop; and Enoiu et al [13] analyzed tests created manually or generated with a tool for PLCs. These experiments used mutation score or correct and faulty versions to compute fault detection capability.…”
Section: Related Work
confidence: 99%
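As a side note on the statement above: the sketch below is a minimal, hypothetical Python illustration of how a mutation score is typically computed to estimate fault detection capability; it is not taken from any of the cited experiments, and the toy Program type and example programs are assumptions made purely for illustration.

from typing import Callable, Sequence

Program = Callable[[int], int]  # toy model of a unit: maps one input to one output

def mutation_score(original: Program,
                   mutants: Sequence[Program],
                   test_inputs: Sequence[int]) -> float:
    """Fraction of mutants killed by the given test inputs.

    A mutant is 'killed' when at least one test input makes its output
    differ from the original program's output.
    """
    if not mutants:
        return 0.0
    killed = sum(
        any(mutant(x) != original(x) for x in test_inputs)
        for mutant in mutants
    )
    return killed / len(mutants)

# Usage: one seeded fault (operator swap) and one mutant that happens to be equivalent.
original = lambda x: x + 1
mutants = [lambda x: x - 1, lambda x: x + 1]
print(mutation_score(original, mutants, test_inputs=[0, 5]))  # -> 0.5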
“…In most of the studies, the tools were evaluated in a technology-oriented setting (e.g., [23,39,44]). Only a limited number of studies involved human participants performing prescribed tasks with the tools [13,16,37].…”
Section: Introduction
confidence: 99%
“…Doganay et al [14] conducted an evaluation of a hill climbing algorithm on industrial code derived from Function Block Diagrams developed by their industrial partners. Enoiu et al [15] conducted an experimental evaluation, also on industrial code, with master's students as experimental subjects.…”
Section: Related Work
confidence: 99%
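For readers unfamiliar with the technique named in the statement above, here is a minimal hill-climbing sketch for test input generation. It assumes a toy branch-distance fitness for a hypothetical target branch (x == 42); it is not the algorithm evaluated by Doganay et al [14], only an illustration of the general idea.

import random

def branch_distance(x: int) -> int:
    # Toy fitness: how far input x is from taking the branch 'if x == 42'
    # in a hypothetical unit under test; zero means the branch is covered.
    return abs(x - 42)

def hill_climb(start: int, max_steps: int = 2000) -> int:
    # Repeatedly move to the neighbouring input that lowers the fitness,
    # stopping when the target branch is covered or a local optimum is reached.
    current = start
    for _ in range(max_steps):
        if branch_distance(current) == 0:
            break
        best = min((current - 1, current + 1), key=branch_distance)
        if branch_distance(best) >= branch_distance(current):
            break  # local optimum
        current = best
    return current

print(hill_climb(random.randint(-1000, 1000)))  # typically converges to 42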
“…Model-Based Testing (MBT) has shown good results in producing effective test suites that reveal faults [16]. For a typical MBT approach, abstract test cases are generated from models first, e.g., using structural coverage criteria (e.g., all-state coverage) [17,18].…”
Section: Related Work
confidence: 99%
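To illustrate the MBT step described in the last statement, the following is a small hypothetical Python sketch: abstract test cases (event sequences) are generated from a toy state-machine model so that every state is reached at least once (all-state coverage). The MODEL, its states, and its event names are invented for illustration and are not taken from the cited work.

from collections import deque
from typing import Dict, List, Tuple

# Hypothetical state machine: state -> list of (event, next_state) transitions.
MODEL: Dict[str, List[Tuple[str, str]]] = {
    "Idle":    [("start", "Running")],
    "Running": [("pause", "Paused"), ("stop", "Idle")],
    "Paused":  [("resume", "Running"), ("stop", "Idle")],
}

def all_state_tests(model: Dict[str, List[Tuple[str, str]]],
                    initial: str) -> List[List[str]]:
    """Generate abstract test cases (event sequences) so that every state
    reachable from the initial state is visited at least once."""
    # Breadth-first search, remembering the shortest event path to each state.
    paths = {initial: []}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for event, target in model.get(state, []):
            if target not in paths:
                paths[target] = paths[state] + [event]
                queue.append(target)
    # One abstract test case per reached state: the event sequence leading to it.
    return [path for state, path in paths.items() if path]

print(all_state_tests(MODEL, "Idle"))
# -> [['start'], ['start', 'pause']]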