This paper presents a comparative analysis of two verification techniques: (1) formal verification of the system specification and (2) execution of FSM-derived test cases on the delivered product. As a testbench, it uses a didactic example of a coffee machine and a work team composed of postgraduate students. The purpose is to analyze the advantages and drawbacks of each technique, identify the kinds of errors detected by each one, and highlight the contributions to the development process.
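To make the second technique concrete, the sketch below models a coffee machine as a finite state machine and derives one test sequence per transition (transition coverage). The states, events, and the dict-based FSM encoding are illustrative assumptions, not the model used in the paper.

```python
from collections import deque

# Illustrative coffee-machine FSM: (state, event) -> next_state.
# States and events are assumptions for the sake of the example.
FSM = {
    ("idle", "insert_coin"): "paid",
    ("paid", "select_coffee"): "brewing",
    ("paid", "cancel"): "idle",
    ("brewing", "done"): "idle",
}

def derive_transition_tests(fsm, start="idle"):
    """Derive one event sequence per transition: reach the source state
    via a shortest path (BFS) from the start state, then fire the event."""
    tests = []
    for (src, event), dst in fsm.items():
        queue, seen, path = deque([(start, [])]), {start}, None
        while queue:
            state, events = queue.popleft()
            if state == src:
                path = events
                break
            for (s, e), nxt in fsm.items():
                if s == state and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, events + [e]))
        if path is not None:
            tests.append(path + [event])
    return tests

for test in derive_transition_tests(FSM):
    print(" -> ".join(test))
```

Each printed sequence is an abstract test case that can be executed against the delivered product, with the FSM's expected target states serving as the oracle.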
This paper presents a field study on real errors found in space software requirements documents. The goal is to understand and characterize the most frequent types of requirement problems in this critical application domain. To classify the software requirement errors analyzed, we initially used a well-known existing taxonomy, which was later extended to allow a more thorough analysis. The results of the study show a high rate of requirement errors (9.5 errors per 100 requirements), which is surprising if we consider that the focus of the work is critical embedded software. Besides the characterization of the most frequent types of errors, the paper also proposes a set of operators that define how to inject realistic errors into requirement documents. This may be used in several scenarios, including: evaluating and training reviewers, estimating the number of requirement errors in real specifications, defining checklists for quick requirement verification, and defining benchmarks for requirements specifications.
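As a rough illustration of what an error-injection operator for requirement text could look like, the sketch below defines two hypothetical operators: one injecting ambiguity by replacing a precise tolerance with a vague qualifier, and one injecting an omission by dropping a trailing condition. The operator names and rewrite rules are assumptions for illustration only; they are not the operators defined in the paper.

```python
import re

def inject_ambiguity(requirement: str) -> str:
    """Hypothetical operator: replace a precise timing tolerance with a vague term."""
    return re.sub(r"within \d+(\.\d+)? (ms|s|seconds)", "as fast as possible", requirement)

def inject_omission(requirement: str) -> str:
    """Hypothetical operator: drop a trailing condition introduced by 'when' or 'if'."""
    return re.sub(r"\s+(when|if)\b.*$", ".", requirement)

req = "The OBC shall acknowledge each telecommand within 50 ms when in nominal mode"
print(inject_ambiguity(req))  # precise tolerance replaced by a vague qualifier
print(inject_omission(req))   # operating-mode condition silently removed
```

Seeding a specification with such known, realistic faults gives a ground truth against which reviewer detection rates can be measured.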
Poorly written requirements are a common source of software defects. In application areas like space systems, the cost of malfunctioning software can be very high. Thus, assessing the quality of software requirements before coding is of utmost importance. This work proposes a systematic procedure for assessing software requirements for space systems that adopt the European Cooperation for Space Standardization (ECSS) standards. The main goal is to provide a low-cost, easy-to-use benchmarking procedure that can be applied during the software requirements review to guarantee that the requirements specifications comply with the ECSS standards. The benchmark includes two checklists composed of a set of questions to be applied to the requirements specification. It was applied to the software requirements specification for one of the services described in the ECSS Packet Utilization Standard (PUS). Results show that the proposed benchmark allows finding more errors with a low effort.
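The following minimal sketch shows how applying a question-based checklist to a requirements specification might be automated and reported. The questions, data layout, and field names are illustrative assumptions; they are not the ECSS-derived checklist items from the paper.

```python
# Each checklist item: (id, question, predicate over the list of requirements).
CHECKLIST = [
    ("Q1", "Does every requirement have a unique identifier?",
     lambda reqs: all(r.get("id") for r in reqs)),
    ("Q2", "Is every requirement stated with 'shall'?",
     lambda reqs: all("shall" in r["text"] for r in reqs)),
    ("Q3", "Does every requirement declare a verification method?",
     lambda reqs: all(r.get("verification") in {"test", "analysis", "inspection", "review"}
                      for r in reqs)),
]

def apply_checklist(reqs):
    """Return the checklist questions that the specification fails to satisfy."""
    return [(qid, question) for qid, question, check in CHECKLIST if not check(reqs)]

# Toy specification with two requirements; the second violates Q1 and Q2.
spec = [
    {"id": "REQ-001", "text": "The service shall report housekeeping data.", "verification": "test"},
    {"id": "", "text": "Telemetry is generated periodically.", "verification": "analysis"},
]
for qid, question in apply_checklist(spec):
    print(f"Non-conformity {qid}: {question}")
```

In practice the answers would come from reviewers reading the specification; the point is that each question yields a yes/no verdict, so non-conformities can be tallied at low cost during the requirements review.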