In recent years, the need for reliable data transmission over Internet traffic and cellular mobile systems has become very important. The Transmission Control Protocol (TCP) is the prevailing protocol providing reliable data transfer for end-to-end data stream services on the Internet and on many newer networks. TCP congestion control has become the key factor governing network behavior and performance. The TCP sender regulates the size of the congestion window (CWND) through the congestion control mechanism, dynamically adjusting the window as packet acknowledgments (ACKs) arrive or when packet losses are detected. TCP congestion control comprises two main phases, slow start and congestion avoidance. Although the two phases operate independently and pursue different objectives, their combination controls CWND and the rate at which packets are injected into the network pipe, and when congestion occurs they are executed together. This article presents an efficient and reliable congestion avoidance mechanism to enhance TCP performance in large-bandwidth, low-latency networks. The proposed mechanism also provides a facility to send multiple flows over the same connection, with a novel technique to dynamically estimate the number of available flows. All experiments validating the proposed techniques are performed in the NS-2 network simulator.
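As background for the mechanism the abstract describes, the following is a minimal sketch of the standard slow-start and congestion-avoidance interplay (per-ACK CWND updates in the style of RFC 5681). It illustrates the baseline behavior only, not the paper's proposed mechanism; the class, method names, and initial ssthresh value are illustrative assumptions.

    public class CwndSketch {
        double cwnd = 1.0;       // congestion window, in segments
        double ssthresh = 64.0;  // slow-start threshold, in segments (illustrative)

        // Called for each new ACK received by the sender.
        void onAck() {
            if (cwnd < ssthresh) {
                cwnd += 1.0;        // slow start: +1 segment per ACK (CWND roughly doubles per RTT)
            } else {
                cwnd += 1.0 / cwnd; // congestion avoidance: roughly +1 segment per RTT
            }
        }

        // Called when packet loss is detected (e.g., three duplicate ACKs).
        void onLoss() {
            ssthresh = Math.max(cwnd / 2.0, 2.0); // multiplicative decrease
            cwnd = ssthresh;                      // resume in congestion avoidance
        }
    }

In slow start CWND grows exponentially until it reaches ssthresh, after which congestion avoidance grows it by about one segment per round-trip time; this linear growth is known to underutilize large-bandwidth, low-latency paths, which is the regime the proposed mechanism targets.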
This article focuses on the quality of data mining algorithms in terms of accuracy and time consumption. To identify the best among several classification and clustering algorithms, all of them are tested in WEKA using a real dataset produced by a study of the effect of software size on defect proneness in open source software, with the Mozilla product adopted as the case study. That study found a significant relationship between the size of the software and the number of defect-prone modules, and its output dataset serves as the input to WEKA for comparing the data mining algorithms. We use Naive Bayes, the J48 decision tree, and K-Star as classification methods, and Expectation-Maximization and Simple KMeans as clustering methods. The findings show how the algorithms differ in accuracy and in the time each needs to reach a result, and they confirm that software size has a significant effect on defect proneness. The experiments are conducted in WEKA with the aim of determining the best algorithm in terms of accuracy and time consumption, so that it can be relied on in subsequent classification and clustering tests.
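For readers who want to reproduce this kind of comparison, the following is a minimal sketch using WEKA's Java API. The dataset file name mozilla_defects.arff is hypothetical, and only two of the classifiers are shown; the clusterers follow the same pattern via the weka.clusterers package and ClusterEvaluation. This is an illustration of the methodology, not the authors' actual experimental setup.

    import weka.classifiers.Classifier;
    import weka.classifiers.Evaluation;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import java.util.Random;

    public class CompareClassifiers {
        public static void main(String[] args) throws Exception {
            // Load the defect dataset (file name is hypothetical).
            Instances data = new DataSource("mozilla_defects.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1); // last attribute = defect-prone class

            Classifier[] models = { new NaiveBayes(), new J48() };
            for (Classifier model : models) {
                long start = System.currentTimeMillis();
                Evaluation eval = new Evaluation(data);
                // 10-fold cross-validation, WEKA's usual evaluation protocol
                eval.crossValidateModel(model, data, 10, new Random(1));
                long elapsedMs = System.currentTimeMillis() - start;
                System.out.printf("%s: accuracy = %.2f%%, time = %d ms%n",
                        model.getClass().getSimpleName(), eval.pctCorrect(), elapsedMs);
            }
        }
    }

Timing each model around its cross-validation run, as above, yields exactly the two quantities the paper compares: percentage accuracy and time to reach a result.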