Abstract: Assisted requirements tracing is a process in which a human analyst validates candidate traces produced by an automated requirements tracing method or tool. Assisted requirements tracing occupies the middle ground between the time-consuming, tedious, and error-prone manual tracing commonly applied in practice and the fully automated requirements tracing procedures that are a focal point of academic study. Indeed, in software assurance scenarios, assisted requirements tracing is the only way in which tracing can be at least partially automated. In this paper, we present the results of an extensive 12-month study of assisted tracing, conducted using three different tracing processes at two different sites. We describe the information collected about each study participant and their work on the tracing task, and apply statistical analysis to determine which factors have the largest effect on the quality of the final trace.
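The abstract does not name a specific automated tracing method. The following sketch illustrates one common approach from the traceability literature, not necessarily the tools used in the study: candidate links are generated by scoring requirement/design pairs with TF-IDF cosine similarity, and a human analyst then vets each candidate. All artifact texts, identifiers, and the 0.1 threshold are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical artifacts: high-level requirements and low-level design elements.
high_level = {
    "R1": "The system shall encrypt all stored user credentials.",
    "R2": "The system shall log every failed login attempt.",
}
low_level = {
    "D1": "Passwords are hashed before being written to disk.",
    "D2": "Authentication failures are appended to the audit log.",
}

# Step 1 (automated): score every requirement/design pair by textual similarity.
texts = list(high_level.values()) + list(low_level.values())
tfidf = TfidfVectorizer().fit_transform(texts)
scores = cosine_similarity(tfidf[: len(high_level)], tfidf[len(high_level):])

# Step 2 (automated): keep pairs above a hypothetical threshold as candidates.
THRESHOLD = 0.1
candidates = [
    (hid, lid, scores[i, j])
    for i, hid in enumerate(high_level)
    for j, lid in enumerate(low_level)
    if scores[i, j] >= THRESHOLD
]

# Step 3 (assisted): a human analyst accepts or rejects each candidate link.
# Stubbed here; in the studied process this is the analyst's validation work.
def analyst_accepts(hid: str, lid: str, score: float) -> bool:
    return True  # placeholder for the human decision

final_trace = {(hid, lid) for hid, lid, s in candidates if analyst_accepts(hid, lid, s)}
print(final_trace)
```

The two automated steps are cheap; the studies summarized here concern the cost and reliability of step 3, where the analyst's decisions determine the quality of the final trace.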
Our research group recently discovered that human analysts, when asked to validate candidate traceability matrices, produce predictably imperfect results, in some cases less accurate than the starting candidate matrices. This discovery radically changes our understanding of how to design a fast, accurate, and certifiable tracing process that can be implemented as part of software assurance activities. We present our vision for a new approach to achieving this goal. Further, we posit that human fallibility may affect other software engineering activities that involve decision support tools.
Abstract: Human analysts working with results from automated traceability tools often make incorrect decisions that lead to lower-quality final trace matrices. Because a human must vet the results of trace tools for mission- and safety-critical systems, the hope of developing expedient and accurate tracing procedures lies in understanding how analysts work with trace matrices. This paper describes a study of when and why humans make correct and incorrect decisions during tracing tasks, based on logs of analyst actions. In addition to the traditional measures of recall and precision that describe the accuracy of the results, we introduce and study new measures that focus on analyst work quality: potential recall, sensitivity, and effort distribution. We use these measures to visualize analyst progress toward the final trace matrix, identifying factors that may influence performance and determining how actual tracing strategies, derived from analyst logs, affect results.
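For concreteness, the two traditional measures can be computed directly from a submitted trace matrix and a gold-standard answer set, as in the minimal sketch below. Links are modeled as (source ID, target ID) pairs, an assumed representation; the paper's additional measures (potential recall, sensitivity, effort distribution) are defined over analyst activity logs and are not reconstructed here.

```python
def recall_precision(submitted: set[tuple[str, str]],
                     answer_set: set[tuple[str, str]]) -> tuple[float, float]:
    """Return (recall, precision) of a submitted trace matrix.

    recall    = correct links found / all true links in the answer set
    precision = correct links found / all links the analyst submitted
    """
    true_links = submitted & answer_set
    recall = len(true_links) / len(answer_set) if answer_set else 0.0
    precision = len(true_links) / len(submitted) if submitted else 0.0
    return recall, precision

# Hypothetical example: an analyst's final matrix versus the answer set.
answer = {("R1", "D1"), ("R2", "D2"), ("R3", "D4")}
final = {("R1", "D1"), ("R2", "D2"), ("R2", "D3")}
r, p = recall_precision(final, answer)
print(f"recall={r:.2f} precision={p:.2f}")  # recall=0.67 precision=0.67
```

In this example the analyst found two of the three true links (recall 0.67) but also submitted one false link (precision 0.67), the kind of outcome the study's log-based measures are designed to explain.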