“…We try to answer RQ1 in this section by highlighting the current CrowdRE research focus in the literature. Run-time Feedback [71,22,31,41,72,96] Emerging requirements [15,42,57] Design Rationale [37] Modelling &…”
Section: The Crowd in the Requirements Engineering Activities
“…By crowd [48,54,72], by textual data analysis [13,20,25,33,41,42,49,51,52,62,64,80,86,88,89], by prototyping [22], by sentiment analysis [21,79], by image and unstructured data analysis [21,73]…”
Section: Analysis and Validation
“…There are diverse research efforts on crowd requirements engineering in the surveyed literature using AI techniques. For example, there are works on using natural language processing (NLP) techniques to classify, cluster, and categorize users' feedback into feature requests, bugs, or simple compliments [16,37,41,53,69]. User feedback and runtime human-computer interactions have been analyzed experimentally using NLP and text mining techniques.…”
Section: A Research Map for Intelligent CrowdRE
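The feedback classification described in the snippet above can be illustrated with a minimal keyword-matching sketch. This is not any surveyed tool's method: the categories come from the snippet, but the indicator terms and scoring rule are assumptions chosen purely for illustration; the cited approaches learn such indicators with NLP/ML rather than hand-coding them.

```python
# Minimal keyword-based classifier for crowd feedback (illustrative only).
# The indicator terms below are invented examples, not taken from any
# surveyed approach; real systems learn such indicators from labeled data.

INDICATORS = {
    "feature request": {"add", "support", "would", "wish", "option"},
    "bug": {"crash", "crashes", "error", "freezes", "broken", "fails"},
    "compliment": {"love", "great", "awesome", "fantastic", "thanks"},
}

def classify_feedback(text: str) -> str:
    """Return the category whose indicator terms best match the text."""
    tokens = set(text.lower().split())
    scores = {cat: len(tokens & terms) for cat, terms in INDICATORS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

print(classify_feedback("the app crashes when I open settings"))  # bug
print(classify_feedback("please add an option to export PDFs"))   # feature request
```

A bag-of-words overlap like this is the crudest possible baseline; the works cited in the snippet replace it with trained classifiers and richer linguistic features.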
Software systems are the joint creative products of multiple stakeholders, including both designers and users, based on their perceptions, knowledge, and personal preferences regarding the application context. The rapid rise in the use of Internet, mobile, and social media applications makes it ever more possible to provide channels linking a large pool of highly diversified and physically distributed designers and end users: the crowd. Converging the knowledge of designers and end users in the requirements engineering process is essential for the success of software systems. In this paper, we report the findings of a survey of the literature on crowd-based requirements engineering research. It helps us understand the current research achievements, the areas of concentration, and how requirements-related activities can be enhanced by crowd intelligence. Based on the survey, we propose a general research map and suggest possible future roles of crowd intelligence in requirements engineering.
“…The classifier learns how key indicator terms in textual requirements map onto different categories such as performance and security. Casamayor et al. (2010), Riaz et al. (2014), and Li et al. (2018) propose similar keyword-based techniques to predict categories for different requirements. Guzman et al. (2017) and Williams and Mahmoud (2017) mine requirements from Twitter feeds through a combination of ML and NLP preprocessing.…”
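The idea of learning indicator terms from labeled requirements, as the snippet describes, can be sketched with simple word-category co-occurrence counts. The training sentences and the two categories (performance, security) are invented for illustration; the cited approaches use proper ML models and far richer features.

```python
# Illustrative sketch: learn indicator terms from labeled requirements
# via word-category co-occurrence counts, then predict a category.
# Training data is invented; not the method of any cited paper.
from collections import Counter, defaultdict

def train(labeled):
    """Count how often each word co-occurs with each category."""
    counts = defaultdict(Counter)
    for text, category in labeled:
        for word in text.lower().split():
            counts[category][word] += 1
    return counts

def predict(counts, text):
    """Pick the category whose learned terms best cover the text."""
    words = text.lower().split()
    return max(counts, key=lambda c: sum(counts[c][w] for w in words))

labeled = [
    ("the system shall respond within two seconds", "performance"),
    ("throughput shall exceed 100 requests per second", "performance"),
    ("all passwords shall be stored encrypted", "security"),
    ("access shall require user authentication", "security"),
]
model = train(labeled)
print(predict(model, "login shall require encrypted authentication"))  # security
```

Raw counts like these are essentially unnormalized naive Bayes; real classifiers weight terms (e.g., TF-IDF) and smooth the estimates.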
A simple but important task during the analysis of a textual requirements specification is to determine which statements in the specification represent requirements. In principle, by following suitable writing and markup conventions, one can provide an immediate and unequivocal demarcation of requirements at the time a specification is being developed. However, neither the presence nor a fully accurate enforcement of such conventions is guaranteed. The result is that, in many practical situations, analysts end up resorting to after-the-fact reviews for sifting requirements from other material in a requirements specification. This is both tedious and time-consuming. We propose an automated approach for demarcating requirements in free-form requirements specifications. The approach, which is based on machine learning, can be applied to a wide variety of specifications in different domains and with different writing styles. We train and evaluate our approach over an independently labeled dataset comprised of 33 industrial requirements specifications. Over this dataset, our approach yields an average precision of 81.2% and an average recall of 95.7%. Compared to simple baselines that demarcate requirements based on the presence of modal verbs and identifiers, our approach leads to an average gain of 16.4% in precision and 25.5% in recall. We collect and analyze expert feedback on the demarcations produced by our approach for industrial requirements specifications. The results indicate that experts find our approach useful and efficient in practice. We developed a prototype tool, named DemaRQ, in support of our approach. To facilitate replication, we make available to the research community this prototype tool alongside the non-proprietary portion of our training data.
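The simple baselines the abstract compares against, demarcating requirements by the presence of modal verbs and identifiers, can be sketched in a few lines. The regexes below are assumptions for illustration, not DemaRQ's actual rules, and `REQ-\d+` is a hypothetical identifier convention.

```python
# Sketch of the baseline mentioned above: flag a statement as a requirement
# if it contains a modal verb or a requirement identifier.
# Both patterns are illustrative assumptions, not DemaRQ's rules.
import re

MODAL_VERBS = re.compile(r"\b(shall|must|should|will)\b", re.IGNORECASE)
REQ_ID = re.compile(r"\bREQ-\d+\b")  # hypothetical identifier convention

def is_requirement(sentence: str) -> bool:
    """Flag a sentence as a requirement if it has a modal verb or an ID."""
    return bool(MODAL_VERBS.search(sentence) or REQ_ID.search(sentence))

spec = [
    "REQ-12: The system shall log every failed login attempt.",
    "This chapter gives background on the logging subsystem.",
    "Responses must be returned within 500 ms.",
]
print([is_requirement(s) for s in spec])  # [True, False, True]
```

The abstract's reported gains of 16.4% precision and 25.5% recall are measured against exactly this kind of surface-pattern baseline, which misses requirements phrased without modals and misfires on narrative sentences that happen to contain one.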
“…There are various activities (i.e., elicitation, specification, validation, and management) associated with it that need to be performed effectively to help guarantee the development of quality software [1][2][3][4][5][6][7][8]. There has been a rapid surge in the RE community in effectively using the diverse online user feedback offered on various social media/online platforms, for instance, the Stack Overflow Q&A site [9], Twitter, bug reporting systems, and mobile app stores (i.e., Google's Play Store and Apple's App Store) [10], as among the latent and rich sources of diverse user requirements [11,12]. The SO Q&A online programming community is commonly used by diverse programmers for learning, problem solving, and sharing knowledge on various issues of software development.…”
Context. The improvements made in the last couple of decades in requirements engineering (RE) processes and methods have been accompanied by a rapid rise in the effective use of diverse machine learning (ML) techniques to resolve several multifaceted RE issues. One such challenging issue is the effective identification and classification of software requirements on Stack Overflow (SO) for building quality systems. ML-based techniques applied to this issue have produced quite substantial results, more effective than those produced by the usual natural language processing (NLP) techniques. Nonetheless, a complete, systematic, and detailed comprehension of these ML-based techniques is considerably scarce. Objective. To identify and classify the kinds of ML algorithms used for software requirements identification, primarily on SO. Method. This paper reports a systematic literature review (SLR) collecting empirical evidence published up to May 2020. Results. This SLR study found 2,484 published papers related to RE and SO. The data extraction process of the SLR showed that (1) Latent Dirichlet Allocation (LDA) topic modeling is among the most widely used ML algorithms in the selected studies and (2) precision and recall are among the most commonly used measures for evaluating the performance of these ML algorithms. Conclusion. Our SLR study revealed that while ML algorithms have phenomenal capabilities for identifying software requirements on SO, they are still confronted with various open problems/issues that will eventually limit their practical applications and performance. Our SLR study calls for close collaboration between the RE and ML communities/researchers to handle the open issues confronted in the development of real-world ML-based quality systems.
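Precision and recall, which the SLR found to be the most common evaluation measures, are straightforward to compute from predicted versus ground-truth labels. The post IDs below are hypothetical, chosen only to make the arithmetic concrete.

```python
# Precision and recall computed from hypothetical predicted vs. true
# sets of requirement-bearing posts (the IDs are invented).

def precision_recall(predicted, actual):
    """Precision = TP/|predicted|, recall = TP/|actual|."""
    tp = len(predicted & actual)  # true positives: correctly flagged posts
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

predicted = {1, 2, 3, 5}       # posts a classifier flagged as requirements
actual = {1, 2, 4, 5, 6}       # posts that truly contain requirements
p, r = precision_recall(predicted, actual)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.60
```

Reporting both matters for this task: a classifier that flags every SO post reaches perfect recall with poor precision, while an overly conservative one does the reverse.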