To capitalize on the benefits of software reuse, an efficient selection among candidate reusable assets should be performed in terms of functional fitness and adaptability. The reusability of assets is usually measured through reusability indices. However, existing indices do not capture all facets of reusability, such as structural characteristics, external quality attributes, and documentation. In this paper, we propose a reusability index (REI) as a synthesis of various software metrics and evaluate its ability to quantify reuse, based on the IEEE Standard on Software Metrics Validity. The proposed index is compared with existing ones through a case study on 80 reusable open-source assets. To illustrate the applicability of the proposed index, we performed a pilot study in which real-world reuse decisions were compared with decisions imposed by the use of metrics (including REI). The results of the study suggest that the proposed index presents the highest predictive and discriminative power; it is the most consistent in ranking reusable assets and the most strongly correlated with their levels of reuse. The findings of the paper are discussed to identify the most important aspects of reusability assessment (interpretation of results), and interesting implications for research and practice are provided.
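The abstract does not specify how REI aggregates its constituent metrics, so the following minimal sketch illustrates one plausible form: a weighted average of normalized metric scores. The metric names, weights, and aggregation scheme below are hypothetical and are not the paper's actual formulation.

```python
# Illustrative sketch only: the paper defines REI as a synthesis of several
# software metrics; the specific metrics and weights here are assumptions.

def reusability_index(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Combine normalized metric scores (each in [0, 1]) into a single
    composite index as a weighted average."""
    total_weight = sum(weights.values())
    return sum(weights[m] * metrics[m] for m in weights) / total_weight

# Hypothetical facets covering structure, quality, and documentation.
metrics = {"coupling": 0.7, "complexity": 0.6, "documentation": 0.8}
weights = {"coupling": 0.4, "complexity": 0.3, "documentation": 0.3}
print(f"REI = {reusability_index(metrics, weights):.2f}")  # REI = 0.70
```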
Despite the extensive adoption of crowdsourcing for the timely, cost-effective, and high-quality completion of software development tasks, a large number of crowdsourced challenges fail to acquire a winning solution on time and within the desired cost and quality thresholds. A possible reason is that we currently lack a systematic approach to aid software managers in designing software development tasks to be crowdsourced. This paper attempts to extend the current knowledge on designing crowdsourced software development tasks by empirically answering the following management questions: (a) what type of projects should be crowdsourced; (b) why should one crowdsource, in terms of acquired benefits; (c) where should one crowdsource, in terms of application domain; (d) when to crowdsource, referring to the time period of the year; (e) who will win or participate in the contest; and (f) how to crowdsource (defining contest duration, prize, type of contest, etc.) to acquire the maximum benefits, depending on the goal of crowdsourcing. To answer these questions, we performed a case study on 2,209 software development tasks crowdsourced through the TopCoder platform. The results suggest that there are significant differences in the degree to which crowdsourcing goals are reached across different software development activities. Based on this observation, we suggest that software managers prioritize the goals of crowdsourcing, decide carefully upon the activity to be crowdsourced, and then define the settings of the task.