Defect prediction at the early stages of the software development life cycle is a crucial quality assurance activity and has been studied extensively over the last two decades. Early prediction of defective modules in software under development helps the development team use the available resources efficiently and effectively to deliver a high-quality software product in limited time. Many researchers have developed defect prediction models using machine learning and statistical techniques. Machine learning is an effective approach to identifying defective modules because it extracts hidden patterns from software attributes. In this study, several machine learning classification techniques are used to predict software defects in twelve widely used NASA datasets. The classification techniques include Naïve Bayes (NB), Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), Support Vector Machine (SVM), K Nearest Neighbor (KNN), kStar (K*), One Rule (OneR), PART, Decision Tree (DT), and Random Forest (RF). The performance of these classification techniques is evaluated using several measures: Precision, Recall, F-Measure, Accuracy, MCC, and ROC Area. The detailed results of this research can serve as a baseline for other studies, so that any claim of improved prediction through a new technique, model, or framework can be compared and verified.
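The abstract does not specify the experimental tooling; as a rough, hypothetical illustration of such a benchmark, the sketch below runs a subset of the listed classifiers (only those with direct scikit-learn counterparts) under 10-fold cross-validation with the six reported measures. The dataset here is a synthetic stand-in, not an actual NASA MDP dataset.

```python
# Minimal sketch of a defect-prediction benchmark; NOT the authors' exact setup.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a NASA MDP dataset (static code metrics + binary defect label).
X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=0)

classifiers = {
    "NB": GaussianNB(),
    "MLP": MLPClassifier(max_iter=1000),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
}

# The six measures reported in the study, as scikit-learn scorer names.
scoring = ["precision", "recall", "f1", "accuracy", "matthews_corrcoef", "roc_auc"]

for name, clf in classifiers.items():
    scores = cross_validate(clf, X, y, cv=10, scoring=scoring)
    print(name, {m: round(scores[f"test_{m}"].mean(), 3) for m in scoring})
```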
Testing is considered one of the most expensive activities in the software development process. Fixing defects during testing increases both the cost and the completion time of the project. The cost of testing can be reduced by identifying defective modules during development (before testing), a process known as "Software Defect Prediction", which has been a major research focus over the last two decades. This research proposes a classification framework for the prediction of defective modules using variant-based ensemble learning and feature selection techniques. Variant selection identifies the best optimized versions of the classification techniques so that their ensemble can achieve high performance, whereas feature selection removes features that do not contribute to classification and degrade performance. The proposed framework is implemented on four cleaned NASA datasets from the MDP repository and evaluated using three performance measures: F-measure, Accuracy, and MCC. According to the results, the proposed framework outperformed 10 widely used supervised classification techniques, including:
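As a hedged illustration of the general recipe described above (variant selection via hyper-parameter search, feature selection, and an ensemble of the chosen variants), the following scikit-learn sketch uses assumed search grids, an assumed number of retained features, and a synthetic stand-in dataset; it is not the proposed framework's actual implementation.

```python
# Illustrative sketch only: variant selection + feature selection + ensemble.
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a cleaned NASA MDP dataset (features + binary defect label).
X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=0)

def best_variant(estimator, grid):
    """Variant selection: return the best hyper-parameter variant of a base learner."""
    search = GridSearchCV(estimator, grid, scoring="matthews_corrcoef", cv=5)
    return search.fit(X, y).best_estimator_

rf = best_variant(RandomForestClassifier(random_state=0), {"n_estimators": [50, 100, 200]})
dt = best_variant(DecisionTreeClassifier(random_state=0), {"max_depth": [3, 5, None]})
nb = GaussianNB()

# Feature selection followed by a majority-vote ensemble of the selected variants.
model = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),   # drop features that do not help classification
    ("ensemble", VotingClassifier([("rf", rf), ("dt", dt), ("nb", nb)], voting="hard")),
])
model.fit(X, y)
```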
Purpose
Self-localization of an underwater robot using a global positioning sensor or other radio positioning systems is not possible; onboard sensor-based self-localization estimation provides an alternative solution. However, the dynamic and unstructured nature of the sea environment and the highly noise-affected sensory information make underwater robot self-localization a challenging research topic. State-of-the-art multi-sensor fusion algorithms are deficient in dealing with multi-sensor data; for example, the Kalman filter cannot handle non-Gaussian noise, while non-parametric filters such as Monte Carlo localization have a high computational cost. An optimal fusion policy with low computational cost is therefore an important research question for underwater robot localization.
Design/methodology/approach
In this paper, the authors propose a novel predictive coding/biased competition-divisive input modulation (PC/BC-DIM) neural network-based multi-sensor fusion approach, which has the capability to fuse and approximate noisy sensory information in an optimal way.
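The abstract does not detail the network here; the NumPy sketch below applies the standard PC/BC-DIM update rules (divisive error computation followed by a multiplicative prediction update) to fuse two noisy population-coded position readings. The weight construction, tuning widths, noise model, and decoding step are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of PC/BC-DIM iteration for fusing two noisy sensor population codes.
import numpy as np

def gaussian_pop_code(centres, value, sigma=2.0):
    """Population code: one Gaussian-tuned response per neuron centred on `centres`."""
    return np.exp(-0.5 * ((centres - value) / sigma) ** 2)

n = 100                                    # prediction neurons / neurons per sensor
centres = np.linspace(0.0, 50.0, n)        # preferred positions in metres (assumed range)

# Feedforward weights: each prediction neuron expects consistent responses from both sensors.
W = np.hstack([np.stack([gaussian_pop_code(centres, c) for c in centres]),
               np.stack([gaussian_pop_code(centres, c) for c in centres])])  # shape (n, 2n)
V = W / W.max(axis=1, keepdims=True)       # feedback weights (row-normalised copy of W)

# Two noisy readings of the same true position (~25 m), with heavy-tailed (non-Gaussian) noise.
rng = np.random.default_rng(0)
x = np.concatenate([gaussian_pop_code(centres, 24.0), gaussian_pop_code(centres, 26.5)])
x += rng.exponential(0.05, size=x.shape)

# PC/BC-DIM iteration: divisive error computation and multiplicative prediction update.
eps1, eps2 = 1e-6, 1e-3
y = np.zeros(n)
for _ in range(100):
    r = V.T @ y                            # top-down reconstruction of the input
    e = x / (eps2 + r)                     # error neurons: input divided by its prediction
    y = (eps1 + y) * (W @ e)               # prediction neurons accumulate evidence

fused_position = centres[np.argmax(y)]     # simple winner-take-all decode of the fused estimate
print(f"fused position estimate: {fused_position:.2f} m")
```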
Findings
A low mean localization error (1.2704 m) and computation cost (2.2 ms) show that the proposed method performs better than existing techniques in such dynamic and unstructured environments.
Originality/value
To the best of the authors' knowledge, this work provides a novel multisensory fusion approach that overcomes the problem of non-Gaussian noise while achieving higher self-localization estimation accuracy at reduced computational cost.