The authors propose a state-based risk assessment methodology for the analysis and design stage of the Software Development Life Cycle. First, a method is proposed to estimate the risk of a component in its various states within a scenario; the risk of the whole scenario is then estimated. The key data needed for risk assessment are complexity and severity. An Inter-Component State-Dependence Graph is introduced to estimate the complexity of a component state within the system. The severity of a component within a scenario is decided using three hazard analysis techniques: Functional Failure Analysis, Software Failure Mode and Effect Analysis, and Software Fault Tree Analysis. The risk of a scenario is estimated from the risks of the interacting components in their various states within the scenario and from the State COllaboration TEst Model of the scenario. Finally, the system risk is estimated from two inputs: the scenario risks and the Interaction Overview Diagram of the system. The methodology is applied to a Library Management System case study. An experimental comparative analysis shows that a testing team guided by our state-based risk assessment approach achieves higher test efficiency than one guided by an existing component-based risk assessment approach.
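The abstract does not give the aggregation formulas, so the following is only a minimal sketch of how such a bottom-up risk roll-up is commonly structured: a per-state risk factor combining normalized complexity and severity, and a scenario risk computed as a weighted sum of state risks. The function names, the multiplicative combination, and the probability weights are assumptions for illustration, not the authors' published equations.

```python
def state_risk(complexity: float, severity: float) -> float:
    """Hypothetical per-state risk factor: normalized complexity (0..1)
    multiplied by a normalized severity weight (0..1). The actual paper
    derives complexity from the Inter-Component State-Dependence Graph
    and severity from FFA/SFMEA/SFTA hazard analysis."""
    return complexity * severity


def scenario_risk(state_risks: list[float], weights: list[float]) -> float:
    """Hypothetical scenario-level roll-up: a weighted sum over the
    component-state risks exercised in the scenario, with weights
    (e.g., transition or usage probabilities) summing to 1."""
    return sum(w * r for w, r in zip(weights, state_risks))
```

A system-level risk would then be aggregated the same way over scenario risks, using weights taken from the Interaction Overview Diagram.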
Even after thorough testing, a few bugs remain in any program of moderate complexity. These residual bugs are distributed randomly throughout the code. We have observed that bugs in some parts of a program cause more frequent and severe failures than those in other parts. It is therefore necessary to decide what to test more and what to test less within the testing budget. The methods and classes of an object-oriented program can be prioritized according to their potential to cause failures. To this end, we propose a program metric, called the influence metric, that measures the influence of a program element on the rest of the source code. First, we represent the source code as an intermediate graph called the extended system dependence graph. Forward slicing is then applied to a node of the graph to obtain the influence of that node. The influence metric for a method m counts the number of statements of the program that directly or indirectly use the result produced by m. We compute the influence metric for a class c from the influence metrics of all its methods. Because the influence metric is computed statically, it does not capture the expected behavior of a class at run time. It is well known that faults in frequently executed parts of a program lead to more failures. We therefore use the operational profile to find the average execution time of each class in the system. Classes in the source code are then prioritized based on the influence metric and the average execution time. The priority of an element indicates its potential to cause failures. Once all program elements have been prioritized, the testing effort can be apportioned so that the elements likely to cause frequent failures are tested thoroughly. We have conducted experiments on two well-known case studies, a Library Management System and a Trading Automation System, and successfully identified the critical elements in the source code of each.
We have also conducted experiments comparing our scheme with a related scheme. The experimental studies show that our approach is more accurate than the existing one in exposing critical elements at the implementation level.
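The core computation described above, taking the forward slice of a node in a dependence graph and counting the reachable statements, can be sketched as forward reachability over directed edges. This is a simplified illustration, assuming the extended system dependence graph is given as an edge list; the summation used to lift method influence to class influence is an assumed aggregation, since the abstract only says the class metric is computed "from the influence metrics of all its methods".

```python
from collections import deque

def forward_slice(edges: list[tuple[str, str]], node: str) -> set[str]:
    """All nodes reachable from `node` along dependence edges,
    i.e., statements that directly or indirectly use its result."""
    adj: dict[str, list[str]] = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
    seen, queue = {node}, deque([node])
    while queue:
        for nxt in adj.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {node}  # the slice excludes the seed node itself

def influence(edges: list[tuple[str, str]], method_node: str) -> int:
    """Influence metric of a method: size of its forward slice."""
    return len(forward_slice(edges, method_node))

def class_influence(edges: list[tuple[str, str]], methods: list[str]) -> int:
    """Assumed aggregation: sum of the influence of the class's methods."""
    return sum(influence(edges, m) for m in methods)
```

On a toy graph where method `m1` feeds statements `s1 -> s2` and also `s3`, `influence` returns 3, reflecting the two indirect users as well as the direct ones.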
Cloud computing is gaining popularity due to its advantages over conventional computing. It offers utility-based services to subscribers on demand. The cloud hosts a variety of web applications and provides services on a pay-per-use basis. As the number of users in the cloud system grows, load balancing has become a critical issue in cloud computing. Scheduling workloads among the various nodes of the cloud environment is essential to achieving a better quality of service; allocating resources with varying capacities and functionality is therefore a prominent and challenging area of research. In this paper, a metaheuristic load balancing algorithm using Particle Swarm Optimization (MPSO) is proposed, exploiting the strengths of the PSO algorithm. The proposed approach aims to minimize task overhead and maximize resource utilization. Performance comparisons are made with the Genetic Algorithm (GA) and other popular algorithms on measures such as makespan and resource utilization. Different cloud configurations with varying numbers of Virtual Machines (VMs) and Cloudlets are considered to analyze the efficiency of the proposed algorithm. The proposed approach outperforms the existing schemes.
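To make the PSO-based scheduling idea concrete, here is a minimal sketch of applying standard PSO to the cloudlet-to-VM assignment problem with makespan as the fitness function. The continuous-to-discrete decoding, the inertia and acceleration coefficients, and all function names are illustrative assumptions; the paper's MPSO algorithm, overhead model, and parameter settings are not specified in the abstract.

```python
import random

def makespan(assign: list[int], task_len: list[float], vm_speed: list[float]) -> float:
    """Completion time of the busiest VM under a given task-to-VM assignment."""
    loads = [0.0] * len(vm_speed)
    for task, vm in enumerate(assign):
        loads[vm] += task_len[task] / vm_speed[vm]
    return max(loads)

def pso_schedule(task_len, vm_speed, particles=20, iters=50, seed=0):
    """Sketch of PSO task scheduling: each particle holds one continuous
    value per task in [0, m), decoded to a VM index by flooring."""
    rng = random.Random(seed)
    n, m = len(task_len), len(vm_speed)
    decode = lambda p: [min(int(x), m - 1) for x in p]
    pos = [[rng.uniform(0, m) for _ in range(n)] for _ in range(particles)]
    vel = [[0.0] * n for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [makespan(decode(p), task_len, vm_speed) for p in pos]
    g = pbest_f.index(min(pbest_f))
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # assumed inertia/acceleration coefficients
    for _ in range(iters):
        for i in range(particles):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.0), m - 1e-9)
            f = makespan(decode(pos[i]), task_len, vm_speed)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return decode(gbest), gbest_f
```

With four tasks of lengths 4, 3, 2, 1 on two equal-speed VMs, the swarm converges toward the balanced split with makespan 5, the theoretical lower bound for that instance.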