The requirements traceability matrix (RTM) supports many software engineering and software verification and validation (V&V) activities, such as change impact analysis, reverse engineering, reuse, and regression testing. The generation of RTMs is tedious and error-prone, though; as a result, RTMs are often not generated or maintained. Automated techniques have been developed to generate candidate RTMs with some success: automating the process can save time and potentially improve the quality of the results. When using RTMs to support the V&V of mission- or safety-critical systems, however, a human analyst is required to vet the candidate RTMs. The focus thus becomes the quality of the final RTM. This thesis introduces an experimental framework for studying human interactions with decision support software and reports the results of a study that applies the framework to investigate how human analysts perform when vetting candidate RTMs generated by automated methods. Specifically, the study was undertaken at two universities and had 33 participants analyze RTMs of varying accuracy for a Java code formatter program. The study found that analyst behavior differs depending on the initial candidate RTM given to the analyst, but that all analysts tend to converge their final RTMs toward a hot spot in the recall-precision space.
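The recall-precision framing above can be made concrete: if each trace link is treated as a (requirement, code element) pair, a candidate RTM is scored against a gold-standard answer set. A minimal sketch, with illustrative identifiers not drawn from the thesis:

```python
def rtm_accuracy(candidate, answer_set):
    """Return (recall, precision) of a candidate RTM against a gold answer set.

    Both arguments are collections of (requirement, element) trace links.
    """
    candidate, answer_set = set(candidate), set(answer_set)
    true_links = candidate & answer_set  # correctly recovered links
    recall = len(true_links) / len(answer_set) if answer_set else 0.0
    precision = len(true_links) / len(candidate) if candidate else 0.0
    return recall, precision


# Hypothetical example: two of three true links found, one spurious link added.
gold = {("R1", "Formatter.java"), ("R2", "Indenter.java"), ("R3", "Formatter.java")}
cand = {("R1", "Formatter.java"), ("R2", "Indenter.java"), ("R2", "Io.java")}
print(rtm_accuracy(cand, gold))  # recall 2/3, precision 2/3
```

A candidate RTM with high recall but low precision contains many spurious links for the analyst to reject; one with high precision but low recall forces the analyst to search for missing links, which the study suggests analysts handle differently.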
Abstract: Assisted requirements tracing is a process in which a human analyst validates candidate traces produced by an automated requirements tracing method or tool. Assisted requirements tracing splits the difference between the commonly applied manual tracing, which is time-consuming, tedious, and error-prone, and the fully automated requirements tracing procedures that are a focal point of academic studies. In fact, in software assurance scenarios, assisted requirements tracing is the only way in which tracing can be even partially automated. In this paper, we present the results of an extensive 12-month study of assisted tracing, conducted using three different tracing processes at two different sites. We describe the information collected about each study participant and their work on the tracing task, and apply statistical analysis to determine which factors have the largest effect on the quality of the final trace.
Our research group recently discovered that human analysts, when asked to validate candidate traceability matrices, produce predictably imperfect results, in some cases less accurate than the starting candidate matrices. This discovery radically changes our understanding of how to design a fast, accurate, and certifiable tracing process that can be implemented as part of software assurance activities. We present our vision for a new approach to achieving this goal. Further, we posit that human fallibility may similarly affect other software engineering activities involving decision support tools.