In the telecom sector, a huge volume of data is generated daily due to a vast client base. Decision makers and business analysts have emphasized that acquiring new customers is costlier than retaining existing ones. Business analysts and customer relationship management (CRM) analysts need to know the reasons why customers churn, as well as the behavior patterns present in existing churn customers' data. This paper proposes a churn prediction model that uses both classification and clustering techniques to identify churn customers and the factors behind customer churn in the telecom sector. Feature selection is performed using the information gain and correlation attribute ranking filters. The proposed model first classifies churn customer data using classification algorithms, among which the Random Forest (RF) algorithm performed best with 88.63% correctly classified instances. Since creating effective retention policies to prevent churn is an essential CRM task, after classification the proposed model segments the churning customers into groups using cosine similarity so that group-based retention offers can be provided. This paper also identifies churn factors that are essential in determining the root causes of churn. By knowing the significant churn factors from customer data, CRM can improve productivity, recommend relevant promotions to groups of likely churn customers based on similar behavior patterns, and substantially improve the company's marketing campaigns. The proposed churn prediction model is evaluated using metrics such as accuracy, precision, recall, F-measure, and receiver operating characteristic (ROC) area. The results reveal that the proposed churn prediction model produces better churn classification using the RF algorithm and customer profiling using k-means clustering.
Furthermore, it also provides the factors behind customer churn through rules generated by the attribute-selected classifier algorithm.
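The cosine-similarity grouping step described above can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the feature vectors, similarity threshold, and greedy grouping rule are all assumptions made for the sake of the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def group_churners(customers, threshold=0.95):
    """Greedily assign each churn customer to the first group whose
    representative (first member) is cosine-similar above the threshold."""
    groups = []  # each group is a list of (customer_id, vector) pairs
    for cid, vec in customers:
        for group in groups:
            if cosine_similarity(group[0][1], vec) >= threshold:
                group.append((cid, vec))
                break
        else:
            groups.append([(cid, vec)])
    return groups

# Toy churn-customer feature vectors (e.g., normalized usage statistics).
churners = [
    ("C1", [0.9, 0.1, 0.0]),
    ("C2", [0.85, 0.15, 0.05]),
    ("C3", [0.0, 0.2, 0.95]),
]
groups = group_churners(churners)
```

Customers falling in the same group can then be offered the same group-based retention package, which is the intent of the segmentation step described in the abstract.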
Accurately ranking various web services can be a challenging task depending on the evaluation criteria used; however, it can play an important role in the subsequent selection of web services. This paper proposes an approach that uses trust prediction and the confusion matrix to rank web services based on throughput and response time. The AdaBoostM1 and J48 classifiers are used as binary classifiers on a benchmark web services dataset. A trust score (TS) measuring method is proposed that uses the confusion matrix to determine the trust scores of all web services. Trust prediction is computed using 5-fold, 10-fold, and 15-fold cross-validation. The reported results showed that web service 1 (WS1) was the most trusted by users, with a TS value of 48.5294%, and web service 2 (WS2) was the least trusted, with a TS value of 24.0196%. Correct prediction of trusted and untrusted users in web service invocation improves the overall selection process in a pool of similar web services. Kappa statistics are used to evaluate the proposed approach and to compare the performance of the two above-mentioned classifiers.
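A confusion-matrix-derived trust score and the kappa statistic mentioned above can be illustrated as follows. The abstract does not give the paper's exact TS formula, so the `trust_score` function below is a hypothetical stand-in (accuracy-style scoring over the 2x2 matrix); only the Cohen's kappa computation is standard.

```python
def trust_score(tp, fn, fp, tn):
    """Hypothetical trust score: percentage of invocations whose users were
    correctly predicted as trusted. (Assumed form; the paper's exact TS
    formula is not given in the abstract.)"""
    total = tp + fn + fp + tn
    return 100.0 * tp / total

def cohens_kappa(tp, fn, fp, tn):
    """Cohen's kappa for a 2x2 confusion matrix: agreement beyond chance."""
    total = tp + fn + fp + tn
    po = (tp + tn) / total                          # observed agreement
    p_yes = ((tp + fn) / total) * ((tp + fp) / total)
    p_no = ((fp + tn) / total) * ((fn + tn) / total)
    pe = p_yes + p_no                               # chance agreement
    return (po - pe) / (1 - pe)

# Toy confusion matrix for one web service under one classifier.
ts = trust_score(tp=40, fn=10, fp=5, tn=45)
kappa = cohens_kappa(tp=40, fn=10, fp=5, tn=45)
```

Computing a TS value per web service in this way allows the services to be ranked, and the kappa values allow the two classifiers to be compared beyond raw accuracy.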
The centralized control intelligence and network abstraction of software-defined networking (SDN) aim to facilitate applications, service deployment, programmability, and innovation, and to ease configuration management of the underlying networks. However, this centralized control intelligence and programmability is also a prime target for evolving cyber threats and attacks seeking to throw the entire network into chaos. The authors propose a control-plane-based orchestration against varied sophisticated threats and attacks. The proposed mechanism comprises a hybrid CUDA-enabled deep-learning-driven architecture that utilizes the predictive power of a long short-term memory (LSTM) network and a convolutional neural network (CNN) for efficient and timely detection of multi-vector threats and attacks. A current state-of-the-art dataset, CICIDS2017, and standard performance evaluation metrics are employed to thoroughly evaluate the proposed mechanism. The proposed technique is rigorously compared with the authors' other hybrid DL architectures and with current benchmark algorithms. The analysis shows that the proposed approach achieves superior detection accuracy with only a trivial trade-off in speed efficiency. A 10-fold cross-validation was also performed to show that the results are unbiased. INDEX TERMS: Security, hybrid deep learning model, software defined networks, long short-term memory, convolutional neural network.
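The idea of the hybrid pipeline above — convolutional feature extraction followed by a recurrent stage that accumulates temporal context — can be shown with a deliberately tiny, dependency-free toy. This is not the paper's CUDA-enabled LSTM+CNN architecture; the convolution kernel, the simplified recurrence standing in for the LSTM, and the detection threshold are all illustrative assumptions.

```python
import math

def conv1d(seq, kernel):
    """Valid 1-D convolution: local pattern extraction (the CNN stage)."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def simple_recurrent(features, w_in=0.5, w_rec=0.5):
    """A toy recurrence standing in for the LSTM stage: folds the convolved
    features into a single temporal summary value."""
    h = 0.0
    for x in features:
        h = math.tanh(w_in * x + w_rec * h)
    return h

def detect(flow, kernel=(1.0, -1.0), threshold=0.4):
    """Flag a traffic flow as anomalous if the recurrent summary of its
    convolved features exceeds a (hypothetical) threshold."""
    return simple_recurrent(conv1d(list(flow), list(kernel))) > threshold

bursty = detect([0, 0, 5, 0, 0])  # sudden spike in a flow statistic
steady = detect([1, 1, 1, 1, 1])  # flat, benign-looking flow
```

In the real system the CNN captures spatial patterns across packet features and the LSTM captures their evolution over time; the toy preserves only that division of labor.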
The volume of research articles in digital repositories is increasing. This spectacular growth of repositories makes it rather difficult for researchers to obtain research papers relevant to their queries. The problem becomes worse when a researcher with insufficient experience in searching for research articles uses these repositories. In traditional recommendation approaches, the query results miss many high-quality papers that belong in the related-work section but are either recently published or have a low citation count. Overcoming this problem requires a solution that considers not only the structural relationships between papers but also the quality of the authors publishing those articles. Many research paper recommendation approaches have been implemented, including collaborative filtering-based, content-based, and citation analysis-based techniques. Collaborative filtering-based approaches primarily use the paper-citation matrix for recommendations, whereas content-based approaches consider only the content of the paper. Citation analysis considers the structure of the network and focuses on papers citing, or cited by, the paper of interest. It is therefore very difficult for a recommender system to recommend high-quality papers without a hybrid approach that incorporates multiple features, such as citation information and author information. The proposed method creates a multilevel citation and author relationship network, in which the citation network uses the structural relationships between papers to extract significant papers, and the authors' collaboration network finds key authors among those papers. The papers selected by this hybrid approach are then recommended to the user. The results show that the proposed method performs exceedingly well compared with state-of-the-art existing systems, such as Google Scholar and the multilevel simultaneous citation network.
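The two levels of the hybrid network described above — extracting significant papers from the citation structure, then finding key authors among them — can be sketched with plain dictionaries. The citation data, author lists, and in-degree ranking criterion below are illustrative assumptions, not the paper's actual dataset or scoring function.

```python
from collections import Counter

# Toy citation network: paper -> papers it cites (hypothetical data).
citations = {
    "P1": ["P2", "P3"],
    "P2": ["P3"],
    "P4": ["P3", "P2"],
}
# Authors of each paper (hypothetical data).
authors = {"P1": ["A"], "P2": ["B", "C"], "P3": ["C"], "P4": ["D"]}

def significant_papers(citations, top_n=2):
    """Level 1: rank papers by in-degree (citation count) in the network."""
    counts = Counter(cited for refs in citations.values() for cited in refs)
    return [p for p, _ in counts.most_common(top_n)]

def key_authors(papers, authors):
    """Level 2: find the authors appearing most often among the
    significant papers from level 1."""
    counts = Counter(a for p in papers for a in authors.get(p, []))
    return [a for a, _ in counts.most_common()]

top = significant_papers(citations)
leading = key_authors(top, authors)
```

Papers by the level-2 key authors that also rank well in the level-1 citation network would then be the candidates recommended to the user, which is the combination the hybrid approach relies on.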
Web services have emerged as an accessible technology built on the standard Extensible Markup Language (XML), through the XML-based Web Services Description Language (WSDL). Web services have become a promising technology for promoting the interrelationship between service providers and users. Users' trust in web services is measured by quality metrics, and these quality metrics vary across the benchmark datasets used in existing studies. This makes the selection of a benchmark dataset for classifying and retesting web services problematic. This paper proposes a method to rank web service quality metrics for the selection of benchmark web services datasets. Factor analysis with Varimax rotation and a scree plot is a well-established method for measuring the diversity in quality metrics. We use factor analysis to determine the percentage of variance explained by the principal factors of four benchmark datasets. Our results showed that a two-factor solution explained 94.501%, 76.524%, and 45.009% of the variance in datasets A, B, and D, respectively, while a three-factor solution explained 85.085% of the variance in dataset C. The reliability and response time quality metrics were identified as the most dominant metrics contributing to the explained variance in the four datasets. Our proposed web metric ranking (WMR) method ranked reliability as the topmost web metric, with a 57.62% score, and latency as the bottom-most, with a 3.60% score. The proposed WMR method showed a high ranking precision of 96.17%. The obtained results verify that the factor solutions, after dimensionality reduction, can be generalized and used in the quality improvement of web services. In future work, the authors plan to focus on a dataset with dominant quality metrics to perform regression testing of web services. INDEX TERMS: Factor analysis, quality metrics, rotated loading, reliability, response time, regression testing, web services.
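A WMR-style ranking over per-dataset variance contributions can be sketched as below. The abstract does not spell out the WMR formula, so the averaging-and-normalizing scheme here is an assumed form, and the contribution figures are invented placeholders rather than the paper's factor-analysis results.

```python
# Hypothetical per-dataset variance contributions (%) of each quality metric;
# the real inputs would come from the rotated factor loadings in the paper.
contributions = {
    "reliability":   [40.0, 35.0, 30.0, 25.0],
    "response_time": [30.0, 25.0, 20.0, 20.0],
    "latency":       [2.0, 1.5, 1.0, 3.0],
}

def wmr_scores(contributions):
    """Assumed WMR-style scoring: average each metric's contribution across
    the datasets, then normalize so all scores sum to 100%."""
    avg = {m: sum(v) / len(v) for m, v in contributions.items()}
    total = sum(avg.values())
    return {m: round(100.0 * a / total, 2) for m, a in avg.items()}

scores = wmr_scores(contributions)
ranking = sorted(scores, key=scores.get, reverse=True)
```

Under any scheme of this shape, a metric that dominates the explained variance across datasets (reliability in the paper's results) ends up at the top of the ranking, and a marginal one (latency) at the bottom.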