Accurate software effort estimation is of high importance in software project management. It can be defined as the process of predicting the effort, in terms of cost, needed to develop a software product. Many software effort estimation techniques have been developed to build models that yield optimal estimation accuracy; swarm intelligence is one such technique. The process of selecting the optimal estimation algorithm is expert-dependent and complex. The present study optimizes estimation with the COCOMO II model using two approaches: the first applies the dolphin algorithm, and the second applies a proposed hybrid dolphin and bat algorithm (DolBat). Both approaches were applied to two datasets and evaluated using the Magnitude of Relative Error (MRE) and the Mean Magnitude of Relative Error (MMRE). The results indicate that the dolphin algorithm outperforms previous algorithms, but DolBat is the best at finding the coefficient values of the COCOMO II model; a worked sketch of the COCOMO II effort equation and the MMRE measure follows the index terms.
INDEX TERMS: Bat algorithm, COCOMO II model, dolphin algorithm, effort estimation, echolocation, NASA project dataset.
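To make the evaluation concrete, the following minimal Python sketch (not the authors' implementation) shows the post-architecture COCOMO II effort equation and the MRE/MMRE accuracy measures named in the abstract. The coefficients A = 2.94 and B = 0.91 are the published COCOMO II defaults; these are the values that search algorithms such as dolphin and DolBat would tune. The project figures in the usage lines are hypothetical.

def cocomo2_effort(kloc, effort_multipliers, scale_factors, a=2.94, b=0.91):
    """Effort (person-months) = A * Size^E * prod(EM_i), with E = B + 0.01 * sum(SF_j)."""
    e = b + 0.01 * sum(scale_factors)
    effort = a * kloc ** e
    for em in effort_multipliers:
        effort *= em
    return effort

def mre(actual, estimated):
    """Magnitude of Relative Error for a single project."""
    return abs(actual - estimated) / actual

def mmre(actuals, estimates):
    """Mean Magnitude of Relative Error over a dataset."""
    return sum(mre(a, e) for a, e in zip(actuals, estimates)) / len(actuals)

# Hypothetical project: 100 KLOC, nominal effort multipliers, mid-range scale factors.
est = cocomo2_effort(100, effort_multipliers=[1.0], scale_factors=[3.72] * 5)
print(f"estimated effort: {est:.1f} person-months")
print(f"MMRE vs a hypothetical actual of 500 PM: {mmre([500.0], [est]):.3f}")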
As a program evolves, its size and complexity increase and the scope of testing expands, so the efficiency of program testing must be improved to ensure on-time delivery and to reduce the cost of test development. This study focuses on automatically generating test suites that increase path coverage using two algorithms: the Grey Wolf Optimizer (GWO) and Particle Swarm Optimization (PSO). The two implementations are compared with each other to demonstrate the performance and efficiency of the proposed methodology, and the results show that the PSO algorithm outperforms the GWO algorithm.
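As an illustration of the PSO side of this comparison, the sketch below applies a textbook global-best PSO to a toy search-based test-generation problem: each particle encodes a candidate test input, and the fitness is a branch distance that reaches zero when the input drives execution down the target path. The program under test, the fitness function, and all PSO parameters here are illustrative assumptions, not the study's actual setup.

import random

def program_under_test(x, y):
    # The target path takes both branches below.
    if x > 50 and y == x * 2:
        return "target path"
    return "other path"

def branch_distance(x, y):
    # Branch-distance fitness: 0 when the target path would be taken.
    d1 = max(0.0, 51.0 - x)      # distance to x > 50, using the standard +1 offset
    d2 = abs(y - x * 2)          # distance to satisfying y == x * 2
    return d1 + d2

def pso(fitness, dim=2, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    # Textbook global-best PSO over a continuous input domain.
    pos = [[random.uniform(-100, 100) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: fitness(*p))
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(*pos[i]) < fitness(*pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=lambda p: fitness(*p))
    return gbest

best = pso(branch_distance)
print("best input:", [round(v, 2) for v in best],
      "fitness:", round(branch_distance(*best), 4))  # fitness should approach 0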
A fault is an error that affects system behaviour. A software metric is a value that represents the degree to which software processes work properly and where faults are more likely to occur. In this study, we examine the effects of removing redundancy and applying a log transformation, based on threshold values, on identifying fault-prone classes of software. The study also compares the metric values of the original datasets with those obtained after removing redundancy and applying the log transformation. An e-learning dataset and a system dataset were taken as case studies. The fault ratio ranged from 1%-31% and 0%-10% for the original datasets, and from 1%-10% and 0%-4% after removing redundancy and applying the log transformation, respectively. These results directly affected the number of detected classes, which ranged from 1-20 and 1-7 for the original datasets and from 1-7 and 0-3 after removing redundancy and applying the log transformation. The skewness of the datasets decreased after applying the proposed model. The classes classified as faulty need more attention in subsequent versions, either to reduce the fault ratio or to be refactored to increase the quality and performance of the current version of the software.
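The following is a minimal sketch of the two preprocessing steps described above, assuming a pandas DataFrame of class-level metrics with duplicate rows; the column names, sample values, and fault-proneness threshold are all hypothetical. It drops redundant rows, applies a log transformation, and compares skewness before and after.

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "loc": [120, 120, 45, 900, 45, 30, 2200],   # lines of code per class
    "wmc": [15, 15, 4, 60, 4, 2, 110],          # weighted methods per class
})

deduped = df.drop_duplicates()                  # remove redundant (duplicate) rows
transformed = np.log1p(deduped)                 # log(1 + x) handles zero values

print("skewness before:", df.skew().round(2).to_dict())
print("skewness after: ", transformed.skew().round(2).to_dict())

# Flag fault-prone candidates: metric value above a chosen threshold.
# The threshold here (log-scale wmc > 3.0) is illustrative only.
fault_prone = transformed[transformed["wmc"] > 3.0]
print("fault-prone candidates:")
print(fault_prone)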