Our goal is to analyze the optimality of search strategies for use in systematic reviews of software engineering experiments. Study retrieval is an important problem in any evidence-based discipline, but this question has not yet been examined for evidence-based software engineering. We have run several searches, using different terms that denote experiments, to evaluate their recall and precision. Based on our evaluation, we propose using a high-recall strategy when resources are ample or the results need to be exhaustive. In any other case, we propose optimal, or even acceptable, search strategies. As a secondary goal, we have analysed trends and weaknesses in the terminology used in articles reporting software engineering experiments. We have found that it is impossible for a search strategy to retrieve 100% of the experiments of interest (as is possible in other experimental disciplines) because of the lack of reporting standards in the community.
Keywords: Evidence-based software engineering · Systematic review
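The recall and precision that this study uses to compare search strategies are straightforward to compute once a gold set of known relevant experiments exists. The sketch below is illustrative only; the function name, variable names and the worked numbers are ours, not the authors'.

```python
def recall_precision(retrieved: set, relevant: set) -> tuple:
    """Recall = fraction of relevant studies retrieved;
    precision = fraction of retrieved studies that are relevant."""
    hits = retrieved & relevant
    recall = len(hits) / len(relevant) if relevant else 0.0
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    return recall, precision

# Hypothetical example: a search string returns 40 studies, 18 of which
# belong to a 30-study gold set of known experiments.
# recall = 18/30 = 0.60, precision = 18/40 = 0.45
```

Under this framing, a "high-recall" strategy maximizes the first value at the cost of the second, which is why it only pays off when resources allow screening many irrelevant hits.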
Context: This research deals with the selection of requirements elicitation techniques for software product requirements and with the overselection of open interviews. Objectives: This paper proposes and validates a framework to help requirements engineers select the most adequate elicitation techniques at any time. Method: We explored both the existing underlying theory and the results of empirical research to build the framework. On this basis, we deduced and assembled justified proposals about the framework components. We also had to add information not found in theoretical or empirical sources; in these cases, we drew on our own experience and expertise. Results: A new, validated approach for requirements elicitation technique selection. This approach selects techniques other than open interview, offers a wider range of possible techniques and captures more requirements information. Conclusions: The framework is easily extensible and changeable. Whenever theoretical or empirical evidence for an attribute, technique or adequacy value is unearthed, the information can easily be added to the framework.
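The abstract does not give the framework's internals, but its named components (techniques, attributes and adequacy values) suggest a lookup structure that scores each technique against the attributes of the current elicitation situation. The following is a minimal sketch under that assumption; the attribute names, the technique list and the adequacy scores are invented for illustration and are not taken from the paper.

```python
# Hypothetical adequacy values (0-10) of elicitation techniques per
# situational attribute; the real framework derives such values from
# theoretical and empirical evidence.
ADEQUACY = {
    "open interview": {"elicitor experience: low": 4, "domain knowledge: low": 5},
    "prototyping":    {"elicitor experience: low": 7, "domain knowledge: low": 8},
    "card sorting":   {"elicitor experience: low": 6, "domain knowledge: low": 7},
}

def rank_techniques(attributes):
    """Rank techniques by mean adequacy over the attributes that
    describe the current elicitation situation."""
    scores = {
        tech: sum(vals[a] for a in attributes) / len(attributes)
        for tech, vals in ADEQUACY.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_techniques(["elicitor experience: low", "domain knowledge: low"]))
```

A table-driven design like this matches the paper's conclusion: adding a new technique, attribute or adequacy value is a data change, not a code change.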
Existing empirical studies of test-driven development (TDD) report different conclusions about its effects on quality and productivity, and very few of them are experiments conducted with software professionals in industry. We aim to analyse the effects of TDD on the external quality of the work done and on the productivity of developers in an industrial setting. We conducted an experiment with 24 professionals from three different sites of a software organization. We chose a repeated-measures design and asked subjects to apply TDD and incremental test-last development (ITLD) in two simple tasks and in a realistic application close to real-life complexity. To analyse our findings, we applied a repeated-measures general linear model procedure and a linear mixed-effects procedure. We did not observe a statistically significant difference between the quality of the work done by subjects under the two treatments. We observed that subjects are more productive when applying TDD to a simple task than when applying ITLD, but productivity drops significantly when TDD is applied to a complex brownfield task, so task complexity significantly obscured the effect of TDD. Further evidence is necessary to conclude whether TDD is better or worse than ITLD in terms of external quality and productivity in an industrial setting. We found that experimental factors such as the selection of tasks can dominate the findings in TDD studies.
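A repeated-measures design like this one is commonly analysed with a linear mixed-effects model that uses a per-subject random intercept to account for the same developer appearing under both treatments. The snippet below shows what such an analysis could look like in Python with statsmodels; the data file and column names are assumptions for illustration, not the authors' materials.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format data: one row per subject x task, with columns
# subject, treatment (TDD/ITLD), task (simple/complex), productivity.
data = pd.read_csv("tdd_experiment.csv")  # hypothetical file

# Linear mixed-effects model: fixed effects for treatment, task and
# their interaction; random intercept per subject to model the
# repeated measures.
model = smf.mixedlm("productivity ~ treatment * task", data,
                    groups=data["subject"])
result = model.fit()
print(result.summary())
```

The treatment-by-task interaction term is the quantity of interest here: a significant interaction is what would show task complexity obscuring the effect of TDD.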