Abstract: Classifying requirements into functional requirements (FR) and non-functional requirements (NFR) is an important task in requirements engineering. However, automated classification of requirements written in natural language is not straightforward, due to the variability of natural language and the absence of a controlled vocabulary. This paper investigates how automated classification of requirements into FR and NFR can be improved and how well several machine learning approaches work in this context. We contribute a preprocessing approach that standardizes and normalizes requirements before classification algorithms are applied. Further, we report on how well several existing machine learning methods perform in the automated classification of NFRs into sub-categories such as usability, availability, or performance. Our study is performed on 625 requirements provided by the OpenScience tera-PROMISE repository. We found that our preprocessing improved the performance of an existing classification method. We further found significant differences in the performance of approaches such as Latent Dirichlet Allocation, Biterm Topic Modeling, and Naïve Bayes for the sub-classification of NFRs.
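To make the classification setup concrete, here is a minimal sketch in the spirit of this abstract, assuming a scikit-learn pipeline: a normalization step followed by a Naïve Bayes classifier over TF-IDF features. The normalize function, the toy requirements, and their labels are our illustrative assumptions, not the paper's actual preprocessing or the tera-PROMISE data.

```python
# Minimal sketch (assumed, not the authors' pipeline): normalize requirement
# texts, then train a Naive Bayes classifier to separate FR from NFR.
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline


def normalize(text: str) -> str:
    """Toy standardization: lowercase and strip punctuation and digits."""
    text = text.lower()
    return re.sub(r"[^a-z\s]", " ", text)


# Hypothetical labeled requirements (in practice: the 625 tera-PROMISE items).
requirements = [
    "The system shall allow users to export reports as PDF.",    # FR
    "The system shall respond to queries within 2 seconds.",     # NFR (performance)
    "The product shall be available 99.9% of the time.",         # NFR (availability)
    "The system shall let administrators create new accounts.",  # FR
]
labels = ["FR", "NFR", "NFR", "FR"]

clf = make_pipeline(
    TfidfVectorizer(preprocessor=normalize, stop_words="english"),
    MultinomialNB(),
)
clf.fit(requirements, labels)

print(clf.predict(["The UI shall be usable by novice users without training."]))
```

On the real dataset, the same pipeline shape would be trained on the 625 labeled requirements, and the Naïve Bayes step could be replaced by topic-model-based classifiers such as LDA or BTM to mirror the comparison reported in the abstract.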
Videos are one of the best documentation options for rich and effective communication. They let viewers experience the overall context of a situation by representing concrete realizations of certain requirements. Despite 35 years of research on integrating videos into requirements engineering (RE), videos are not an established documentation option among RE best practices. Several approaches use videos but omit the details of how to produce them. Software professionals are not directors and therefore lack the knowledge to communicate visually with videos: they do not necessarily have the skills either to produce good videos in general or to deduce what constitutes a good video for an existing approach. The discipline of video production provides numerous generic guidelines that represent best practices for producing a good video with specific characteristics. We propose to analyze this existing know-how to learn what constitutes a good video for visual communication. As a plan of action, we suggest a literature study of video production guidelines. We expect to identify quality characteristics of good videos in order to derive a quality model. Software professionals may use such a quality model for videos as an orientation for planning, shooting, post-processing, and viewing a video. Thus, we want to encourage and enable software professionals to produce good videos at moderate cost yet with sufficient quality.
Abstract: Requirements engineering provides several practices to analyze how a user wants to interact with a future software system. Mockups, prototypes, and scenarios are suitable for understanding usability issues and user requirements early. Nevertheless, users are often dissatisfied with the usability of the resulting software; apparently, previously explored information was lost or no longer accessible during the development phase. Scenarios are an effective practice for describing behavior. However, they are commonly written in natural language, which is often inadequate for capturing and communicating interaction knowledge in a form comprehensible to developers and users. The dynamic aspect of interaction is lost if only static descriptions are used. Digital prototyping enables the creation of interactive prototypes by adding responsive controls to hand- or digitally drawn mockups. We propose to capture the events of these controls to obtain a representation of the interaction. From this data, we generate videos that demonstrate interaction sequences, as additional support for textual scenarios. Variants of scenarios can be created by modifying the captured event sequences and mockups. Any change is unproblematic since the videos only need to be regenerated. Thus, we obtain video as a by-product of digital prototyping, which reduces the effort compared to video recording techniques such as screencasts. A first evaluation showed that such a generated video supports a faster understanding of a textual scenario than static mockups do.
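As a rough illustration of the capture-and-regenerate idea, the following sketch assumes a simple event log recorded from a prototype's responsive controls and replays it as a plan of mockup frames. The Event fields, mockup file names, and frame-plan representation are hypothetical, not the authors' tooling; an external tool such as ffmpeg would stitch the frames into the actual video.

```python
# Sketch (assumed data model): record the events fired by a prototype's
# controls, then replay the ordered event log as a sequence of mockup frames.
from dataclasses import dataclass


@dataclass
class Event:
    timestamp_ms: int   # when the control fired during the prototyping session
    control: str        # e.g. a button or link drawn on the mockup
    action: str         # e.g. "tap", "swipe"
    next_mockup: str    # mockup shown as a result of the interaction


# Hypothetical captured session: a login scenario.
session = [
    Event(0,    "start",        "show", "mockup_login.png"),
    Event(1800, "login_button", "tap",  "mockup_dashboard.png"),
    Event(4200, "menu_icon",    "tap",  "mockup_settings.png"),
]


def to_frame_plan(events: list[Event], fps: int = 25) -> list[tuple[str, int]]:
    """Turn the event log into (mockup, frame_count) pairs for video rendering."""
    events = sorted(events, key=lambda e: e.timestamp_ms)
    plan = []
    for cur, nxt in zip(events, events[1:]):
        duration_ms = nxt.timestamp_ms - cur.timestamp_ms
        plan.append((cur.next_mockup, max(1, duration_ms * fps // 1000)))
    plan.append((events[-1].next_mockup, fps * 2))  # hold the last screen 2 s
    return plan


# Each pair says how long a mockup stays on screen; duplicating frames
# accordingly and stitching them yields the generated scenario video.
for mockup, frames in to_frame_plan(session):
    print(f"{mockup}: {frames} frames")
```

Editing the event log or swapping a mockup file yields a scenario variant, and the video is simply regenerated from the new plan, which is the cost advantage over re-recording a screencast.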