Web services have emerged as an accessible technology built on the Extensible Markup Language (XML), with service interfaces described by the Web Services Description Language (WSDL). Web services have become a promising technology for promoting the interrelationship between service providers and users. Users' trust in web services is measured by quality metrics, yet these metrics vary across the benchmark datasets used in existing studies, which makes the selection of a benchmark dataset for classifying and retesting web services problematic. This paper proposes a method to rank web service quality metrics for the selection of benchmark web service datasets. Factor analysis with Varimax rotation and scree plots is a well-established method for measuring diversity in quality metrics; we use it to determine the percentage of variance explained by the principal factors of four benchmark datasets. Our results show that a two-factor solution explained 94.501%, 76.524%, and 45.009% of the variance in datasets A, B, and D, respectively, while a three-factor solution explained 85.085% of the variance in dataset C. Reliability and response time emerged as the dominating quality metrics contributing to the explained variance across the four datasets. Our proposed web metric ranking (WMR) method placed reliability at the top with a 57.62% score and latency at the bottom with a 3.60% score, and achieved a high ranking precision of 96.17%. These results verify that factor solutions obtained after dimensionality reduction can be generalized and used to improve the quality of web services. In future work, the authors plan to focus on a dataset with dominating quality metrics to perform regression testing of web services. INDEX TERMS Factor analysis, quality metrics, rotated loading, reliability, response time, regression testing, web services.
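The variance decomposition behind a scree plot can be illustrated with a short sketch. The data below are synthetic stand-ins, not the paper's benchmark datasets, and the two-factor structure is an assumption built into the example to mirror the two-factor solutions reported above.

```python
import numpy as np

# Synthetic illustration (not the paper's data): rows are web services,
# columns are quality metrics such as response time, reliability, latency.
rng = np.random.default_rng(0)
n_services, n_metrics = 200, 6
latent = rng.normal(size=(n_services, 2))            # two underlying factors
loadings = rng.normal(size=(2, n_metrics))
X = latent @ loadings + 0.3 * rng.normal(size=(n_services, n_metrics))

# Eigen-decomposition of the correlation matrix, the basis of a scree plot.
corr = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]    # descending order
pct_variance = 100 * eigvals / eigvals.sum()

# Cumulative variance explained by the first two factors, analogous to the
# two-factor solutions reported for datasets A, B, and D.
two_factor_pct = pct_variance[:2].sum()
print(f"Variance explained by two factors: {two_factor_pct:.1f}%")
```

In a full analysis the retained factors would then be Varimax-rotated so that each quality metric loads strongly on one factor, which is how the dominating metrics (here, reliability and response time) are identified.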
Measuring and estimating the reusability of software components is important for finding reusable candidates. Researchers have shown that software metrics can be effectively used to assess software reusability. This work provides a systematic literature review investigating the main factors that influence software reusability and how these factors can be quantified using software metrics; it also examines the tool availability of the identified metrics. Based on this extensive study, we narrowed down 44 factors that could positively or negatively affect the reusability of software systems. In terms of software metrics, we report our findings across five main families: coupling, cohesion, complexity, inheritance, and size. We found that most metrics examine reusability at the class level and that the availability of supporting software tools is limited. Furthermore, not all reusability-affecting factors are equally impactful for assessing the reusability of software components. While existing studies often discuss the impact of complexity on software reusability, we found that only a handful of complexity metrics were designed to assess reusability. We identify several open challenges and gaps in the area, in particular a lack of quantifiable measurements for reusability, limited software tools, and few metrics that directly measure reusability.
Measuring and estimating the reusability of software components are important steps toward finding reusable candidates. Reuse of software components can reduce the cost of developing new software or maintaining existing systems. However, assessing the reusability of software is a challenging task. Even when reusable candidates are identified, developers must decide on a reuse strategy: reuse the component as-is without modification, or introduce changes so the candidate fulfills requirements in the new environment or release. The stability of reusable candidates therefore plays a pivotal role, because it influences the ease of reuse, and assessing and predicting that stability becomes a significant step in software reuse. In this research, we propose to leverage risk of change as a proxy measure for software stability, which in turn serves as a proxy for software reusability. Based on experiments conducted on 29 open-source software projects of different sizes and application types hosted on GitHub, we found that classes with a high impact and high risk of change should not be reused due to their instability, while classes with a low impact and risk of change should be given priority. The proposed work can provide a better understanding of the ease of reuse for software systems and can serve as a tool for assessing their overall quality.