Quantitative prediction of quality properties (i.e., extra-functional properties such as performance, reliability, and cost) of software architectures during design supports a systematic software engineering approach. Designing architectures that exhibit a good trade-off between multiple quality criteria is hard, because even after a functional design has been created, many remaining degrees of freedom in the software architecture span a large, discontinuous design space. In current practice, software architects try to find solutions manually, which is time-consuming, can be error-prone, and can lead to suboptimal designs. We propose an automated approach to search the design space for good solutions. Starting with a given initial architectural model, the approach iteratively modifies and evaluates architectural models. Our approach applies a multi-criteria genetic algorithm to software architectures modelled with the Palladio Component Model. It supports quantitative performance, reliability, and cost prediction and can be extended to other quantitative quality criteria of software architectures. We validate the applicability of our approach by applying it to an architecture model of a component-based business information system and analyse its quality criteria trade-offs by automatically investigating more than 1200 alternative design candidates.
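As a rough illustration of the kind of multi-criteria evolutionary search described in this abstract, the following Python sketch mutates architectural degrees of freedom and keeps only the Pareto-optimal candidates. All names (`Candidate`, `evaluate`, the genome keys) and the placeholder evaluation function are illustrative assumptions, not the actual Palladio-based tooling or its prediction models.

```python
# Minimal sketch of a multi-criteria evolutionary design-space search,
# assuming three objectives to minimise: response time, failure probability, cost.
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    genome: dict            # chosen values for the architectural degrees of freedom
    objectives: tuple = ()  # filled in by evaluate()

def evaluate(candidate):
    # Placeholder: the real approach predicts performance, reliability and cost
    # from the architecture model behind the genome.
    rt = sum(candidate.genome.values()) * random.uniform(0.9, 1.1)
    pofod = 1.0 / (1.0 + rt)
    cost = len(candidate.genome) * 10.0
    candidate.objectives = (rt, pofod, cost)

def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in at least one
    return all(x <= y for x, y in zip(a.objectives, b.objectives)) and \
           any(x < y for x, y in zip(a.objectives, b.objectives))

def mutate(candidate):
    genome = dict(candidate.genome)
    key = random.choice(list(genome))
    genome[key] *= random.uniform(0.5, 1.5)   # vary one degree of freedom
    return Candidate(genome)

def search(initial, generations=100, population_size=20):
    evaluate(initial)
    population = [initial]
    for _ in range(generations):
        offspring = [mutate(random.choice(population)) for _ in range(population_size)]
        for child in offspring:
            evaluate(child)
        combined = population + offspring
        # keep only candidates not dominated by any other candidate
        population = [c for c in combined
                      if not any(dominates(o, c) for o in combined if o is not c)]
    return population

# Example usage with a hypothetical starting architecture:
front = search(Candidate({"serverSpeed": 2.0, "threadPoolSize": 8.0}))
```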
Abstract. Design decisions for complex, component-based systems impact multiple quality of service (QoS) properties. Often, means to improve one quality property deteriorate another. In this scenario, selecting a good solution with respect to a single quality attribute can lead to unacceptable results with respect to the other quality attributes. A promising way to deal with this problem is to exploit multi-objective optimization, where the objectives represent the different quality attributes. The aim of these techniques is to devise a set of solutions, each of which represents a trade-off among the conflicting qualities. To automate this task, this paper proposes a combined use of analytical optimization techniques and evolutionary algorithms to efficiently identify a significant set of design alternatives, from which an architecture that best fits the different quality objectives can be selected. The proposed approach can lead both to a reduction of development costs and to an improvement of the quality of the final system. We demonstrate the use of this approach on a simple case study.
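Once such a set of trade-off (non-dominated) solutions is available, a single architecture still has to be picked. The sketch below shows one common way to do that with weighted, normalised objectives; the weights, normalisation, and example alternatives are assumptions for illustration, not the selection procedure of the paper.

```python
# Hedged sketch: pick one architecture from a set of trade-off solutions,
# assuming all objectives (response time, failure probability, cost) are minimised.
def select_best(candidates, weights=(0.4, 0.3, 0.3)):
    """candidates: list of (name, (response_time, failure_prob, cost)) tuples."""
    # normalise each objective to [0, 1] so the weights are comparable
    mins = [min(c[1][i] for c in candidates) for i in range(3)]
    maxs = [max(c[1][i] for c in candidates) for i in range(3)]

    def score(objs):
        return sum(w * (o - lo) / (hi - lo if hi > lo else 1.0)
                   for w, o, lo, hi in zip(weights, objs, mins, maxs))

    return min(candidates, key=lambda c: score(c[1]))

# Example: three hypothetical non-dominated design alternatives
alternatives = [("A", (120.0, 0.02, 900.0)),
                ("B", (150.0, 0.01, 700.0)),
                ("C", (100.0, 0.05, 1200.0))]
print(select_best(alternatives))
```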
Model-based performance evaluation methods for software architectures can help architects to assess design alternatives and save costs for late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the needed modelling effort are heavily influenced by human factors, which have so far hardly been studied empirically. Do component-based methods allow performance predictions of comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected based on the resulting artefacts, questionnaires and screen recording. They were analysed using hypothesis testing, linear models, and analysis of variance. For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models using PCM takes more (but not drastically more) time than using SPE and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because the effort to reuse can be explained by a model that is independent of the inner complexity of a component. The tasks performed in our experiments reflect only a subset of the actual activities when applying model-based performance evaluation methods in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort for component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.
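The abstract mentions hypothesis testing and analysis of variance as the analysis techniques. The following Python sketch illustrates what such an analysis can look like with SciPy; the error values are synthetic placeholders, not the data from these experiments.

```python
# Illustrative sketch of hypothesis testing and one-way ANOVA, as named above.
# The numbers below are made up and only stand in for per-participant results.
from scipy import stats

# Hypothetical prediction errors (%) per participant and method
spe_errors = [8.0, 12.5, 9.3, 15.1, 11.0]
cp_errors  = [10.2, 9.8, 13.4, 12.0, 8.7]
psi_errors = [35.0, 41.2, 38.5, 44.0, 39.1]

# Two-sample t-test: is SPE's mean error different from CP's?
t_stat, p_value = stats.ttest_ind(spe_errors, cp_errors)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# One-way ANOVA across all three methods
f_stat, p_anova = stats.f_oneway(spe_errors, cp_errors, psi_errors)
print(f"F = {f_stat:.2f}, p = {p_anova:.4f}")
```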
Abstract. The problem of interpreting the results of software performance analysis is very critical. Software developers expect feedback in terms of architectural design alternatives (e.g., split a software component into two components and re-deploy one of them), whereas the results of performance analysis are either pure numbers (e.g., mean values) or functions (e.g., probability distributions). Support for interpreting such results, which would help to fill the gap between numbers/functions and software alternatives, is still lacking. Performance antipatterns can play a key role in the search for performance problems and in the formulation of their solutions. In this paper we tackle the problem of identifying, among a set of detected performance antipatterns, the ones that are the real causes of problems (i.e., the "guilty" ones). To this end we introduce a process to elaborate the performance analysis results and to score performance requirements, model entities, and performance antipatterns. The cross-observation of these scores allows us to classify the level of guiltiness of each antipattern. An example modeled in Palladio is provided to demonstrate the validity of our approach by comparing the performance improvements obtained after the removal of differently scored antipatterns.
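To make the idea of cross-observing scores concrete, the following Python sketch ranks detected antipatterns by a simple "guiltiness" value combining requirement and entity scores. The scoring rule, the scores, and the example antipatterns are assumptions for illustration only; the paper defines its own scoring process.

```python
# Rough sketch of ranking detected performance antipatterns by guiltiness,
# assuming a simple product of the scores of the requirements they affect
# and the entities they involve (an illustrative rule, not the paper's).
def guiltiness(antipattern, requirement_scores, entity_scores):
    req_part = sum(requirement_scores[r] for r in antipattern["requirements"])
    ent_part = sum(entity_scores[e] for e in antipattern["entities"])
    return req_part * ent_part

requirement_scores = {"R1_responseTime": 0.8, "R2_throughput": 0.2}  # degree of violation
entity_scores = {"Scheduler": 0.7, "Database": 0.5, "Cache": 0.1}    # contribution to violation

antipatterns = {
    "Blob(Scheduler)": {"requirements": ["R1_responseTime"], "entities": ["Scheduler"]},
    "EmptySemiTrucks": {"requirements": ["R2_throughput"],   "entities": ["Cache"]},
}

ranked = sorted(antipatterns,
                key=lambda a: guiltiness(antipatterns[a], requirement_scores, entity_scores),
                reverse=True)
print(ranked)  # most "guilty" antipattern first
```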