Goal-oriented methods have increasingly been recognised as an effective means for eliciting, elaborating, analysing and specifying software requirements. A key activity in these approaches is the elaboration of a correct and complete set of operational requirements, in the form of pre- and trigger-conditions, that guarantee the system goals. Few existing approaches provide support for this crucial task, which mainly relies on the significant effort and expertise of the engineer. In this paper we propose a tool-based framework that combines model checking, inductive learning and scenarios for elaborating operational requirements from goal models. This is an iterative process that requires the engineer to identify positive and negative scenarios from counterexamples to the goals, generated using model checking, and to select operational requirements from suggestions computed by inductive learning.
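To make the notion of operational requirements concrete, here is a minimal illustrative sketch (a hypothetical mine-pump example, not taken from the paper) of a goal and one possible operationalisation in terms of a required trigger-condition and a required pre-condition, in LTL-style notation:

Goal Achieve[PumpOnWhenHighWater]:  \Box (HighWater \rightarrow \bigcirc PumpOn)
Operation SwitchPumpOn:  DomPre: \neg PumpOn,  DomPost: PumpOn
  ReqTrig (for the goal above): HighWater  — the operation must be applied whenever HighWater holds
  ReqPre (for a companion safety goal): \neg Methane  — the operation may only be applied when no methane is detected

In the framework described by the abstract, candidate conditions of this kind would be among the suggestions computed by inductive learning, from which the engineer selects.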
This paper considers the problem of assumptions refinement in the context of unrealizable specifications for reactive systems. We propose a new counterstrategy-guided synthesis approach for GR(1) specifications based on Craig's interpolants. Our interpolation-based method identifies causes of unrealizability and computes assumptions that directly target unrealizable cores, without the need for user input. We discuss how this property reduces the maximum number of refinement steps needed to converge to realizability compared with other techniques. We describe properties of interpolants that yield helpful GR(1) assumptions and prove the soundness of the results. Finally, we demonstrate that our approach yields weaker assumptions than baseline techniques, and finds solutions in case studies that are unsolvable via existing techniques.
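For context, a GR(1) specification has the standard implication form below (this formulation is standard for GR(1) and is not specific to the paper); assumption refinement strengthens the environment side of the implication, for example by conjoining new initial, safety or fairness assumptions until the specification becomes realizable:

\varphi \;=\; \big(\theta_e \wedge \Box \rho_e \wedge \textstyle\bigwedge_i \Box\Diamond J^e_i\big) \;\rightarrow\; \big(\theta_s \wedge \Box \rho_s \wedge \textstyle\bigwedge_j \Box\Diamond J^s_j\big)

where \theta denotes initial conditions, \rho denotes transition (safety) constraints, and J denotes fairness (justice) conditions, for the environment (e) and the system (s) respectively.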
Missing requirements are known to be among the major causes of software failure. They often result from a natural inclination to conceive over-ideal systems where the software-to-be and its environment always behave as expected. Obstacle analysis is a goal-anchored form of risk analysis whereby exceptional conditions that may obstruct system goals are identified, assessed and resolved to produce complete requirements. Various techniques have been proposed for identifying obstacle conditions systematically. Among these, the formal ones have limited applicability or are costly to automate. This paper describes a tool-supported technique for generating a set of obstacle conditions guaranteed to be complete and consistent with respect to the known domain properties. The approach relies on a novel combination of model checking and learning technologies. Obstacles are iteratively learned from counterexample and witness traces produced by model checking against a goal, converted into positive and negative examples, respectively. A comparative evaluation is provided with respect to published results on the manual derivation of obstacles in a real safety-critical system for which failures have been reported.
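For readers unfamiliar with obstacle analysis, the completeness and consistency properties mentioned in the abstract are usually stated as follows (standard conditions from the goal-obstacle literature, summarised here rather than quoted from the paper). Given a goal G, domain properties Dom, and candidate obstacles O_1, ..., O_n:

Obstruction:        \{Dom, O_i\} \models \neg G            for each i
Domain consistency: Dom \wedge O_i is satisfiable          for each i
Completeness:       \{\neg G, Dom\} \models O_1 \vee \dots \vee O_n

The technique in the abstract aims to learn a set of obstacles satisfying these conditions automatically from model-checking traces.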
Goal-oriented requirements engineering approaches propose capturing how a system should behave through the specification of high-level goals, from which requirements can then be systematically derived. Goals may however admit subtle situations that make them diverge, i.e., not be satisfiable as a whole under specific circumstances feasible within the domain, called boundary conditions. While previous work allows one to identify boundary conditions for conflicting goals written in LTL, it does so through a pattern-based approach that supports a limited set of patterns and only produces pre-determined formulations of boundary conditions. We present a novel automated approach to compute boundary conditions for general classes of conflicting goals expressed in LTL, using a tableaux-based LTL satisfiability procedure. A tableau for an LTL formula is a finite representation of all its satisfying models, which we process to produce boundary conditions that violate the formula, indicating divergence situations. We show that our technique can automatically produce boundary conditions that are more general than those obtainable through existing pattern-based approaches, and can also generate boundary conditions for goals that are not captured by these patterns.
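For reference, the divergence definition underlying this line of work (a standard definition from goal-oriented requirements engineering, not introduced by this abstract) requires a boundary condition BC for goals G_1, ..., G_n under domain properties Dom to satisfy:

Logical inconsistency: \{Dom, BC, G_1 \wedge \dots \wedge G_n\} is inconsistent
Minimality:            \{Dom, BC, \textstyle\bigwedge_{j \neq i} G_j\} is consistent, for each i
Non-triviality:        BC \neq \neg(G_1 \wedge \dots \wedge G_n)

The tableaux-based procedure described in the abstract searches the tableau of the conjoined goals for formulas meeting these conditions, rather than instantiating a fixed catalogue of patterns.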