Variability models are broadly used to specify the configurable features of highly customizable software. In practice, they can be large, defining thousands of features together with their dependencies and conflicts. In such cases, visualization techniques and automated analysis support are crucial for understanding the models. This paper contributes to this line of research by presenting a novel, probabilistic foundation for statistical reasoning about variability models. Our approach not only provides a new way to visualize, describe, and interpret variability models, but it also improves other state-of-the-art methods for software product lines, for instance by providing exact computations where only approximations were available before and by increasing the sensitivity of existing analysis operations for variability models. We demonstrate the benefits of our approach on real case studies with up to 17,365 features, written in two different languages (KConfig and feature models).
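As a rough illustration of what statistical reasoning over a variability model can mean (this is a minimal sketch with a hypothetical toy model, not the method proposed in the paper), one can enumerate the valid configurations of a small feature model and compute, for each feature, the probability that it appears in a configuration drawn uniformly at random:

```python
from itertools import product

# Hypothetical toy feature model: four features and three constraints.
FEATURES = ["Base", "GUI", "CLI", "Logging"]

def is_valid(cfg):
    """Constraints of the toy model (assumed for illustration):
    - Base is mandatory
    - GUI and CLI are alternatives (exactly one is selected)
    - Logging requires GUI
    """
    return (cfg["Base"]
            and (cfg["GUI"] != cfg["CLI"])           # exactly one of GUI/CLI
            and (not cfg["Logging"] or cfg["GUI"]))  # Logging -> GUI

# Enumerate all Boolean assignments and keep the valid configurations.
valid = [dict(zip(FEATURES, bits))
         for bits in product([False, True], repeat=len(FEATURES))
         if is_valid(dict(zip(FEATURES, bits)))]

# Feature probability: share of valid configurations that contain the feature.
for f in FEATURES:
    count = sum(cfg[f] for cfg in valid)
    print(f"P({f}) = {count / len(valid):.2f}  ({count}/{len(valid)} configurations)")
```

For models with thousands of features, brute-force enumeration like this is infeasible, which is precisely why exact, scalable probabilistic foundations matter.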
The software engineering community is rapidly adopting machine learning to transition modern-day software towards highly intelligent and self-learning systems. However, the community is still discovering new ways in which machine learning can support the various stages of the software development life cycle. In this article, we present a study on the use of machine learning across software development life cycle stages. The overall aim is to investigate the relationship between these stages and machine learning tools, techniques, and types. We attempt a holistic investigation, in part to answer the question of whether machine learning favors certain stages and/or certain techniques.
Implementing a change is a challenging task in complex, safety-critical, or long-living software systems. Developers need to identify which artifacts are affected to implement a change correctly and completely. Changes often require editing artifacts across the software system to the extent that several developers need to be involved. Crucially, a developer needs to know which artifacts under someone else's control have an impact on her work task and, in turn, how her changes cascade to other artifacts, again under someone else's control. These cross-task dependencies are especially important because they are a common cause of incomplete and incorrect change propagation and require explicit coordination. Along these lines, the core research question in this paper is: how can we automatically detect cross-task dependencies and use them to assist the developer? We introduce an approach for mining such dependencies from past developer interactions with engineering artifacts as the basis for recommending artifacts live during change implementation. We show that our approach lists 67% of the correctly recommended artifacts within the top-10 results on real interaction data and tasks from the Mylyn project. The results demonstrate that we are able not only to find cross-task dependencies but also to provide them to developers in a useful manner.
Index Terms: cross-task dependencies, change impact assessment, developer interactions, software artifact recommendation, Mylyn, Bugzilla.
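For intuition only (this is not the authors' mining algorithm, just a hedged sketch over made-up interaction logs), one could approximate such dependencies by counting how often pairs of artifacts are touched within the same past task and then, for the artifact currently being edited, recommending the most frequently co-occurring artifacts:

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical interaction logs: each past task maps to the set of
# artifacts the developer touched while working on it.
task_artifacts = {
    "task-101": {"Parser.java", "Lexer.java", "Grammar.g"},
    "task-102": {"Parser.java", "AstBuilder.java"},
    "task-103": {"Lexer.java", "Grammar.g", "Tokens.java"},
}

# Count how often two artifacts were edited together across past tasks.
co_occurrence = defaultdict(Counter)
for artifacts in task_artifacts.values():
    for a, b in combinations(sorted(artifacts), 2):
        co_occurrence[a][b] += 1
        co_occurrence[b][a] += 1

def recommend(artifact, k=10):
    """Return up to k artifacts most frequently edited together with `artifact`."""
    return [name for name, _ in co_occurrence[artifact].most_common(k)]

print(recommend("Parser.java"))  # e.g. ['Grammar.g', 'Lexer.java', 'AstBuilder.java']
```

A real recommender would additionally weight recency, interaction intensity, and task context rather than raw co-occurrence counts.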