Data mining environments produce a large volume of data, and the knowledge it contains can be utilized to improve an organization's decision-making process. Large amounts of available data, when used for decision tree construction, yield large trees that are incomprehensible to human experts, and the learning process on such high-volume data becomes very slow because it must be performed serially over the entire dataset. Random data reduction can be one solution to this problem [6]; Tim Oates and David Jensen [7] showed that removing randomly selected training instances often results in smaller trees that are just as accurate as those built on all available training instances. Moreover, constructing an optimal decision tree has been identified as an NP-complete problem, which leads us to use genetic algorithms, since they provide a global search through the solution space in many directions.

Our ultimate goal is to build smaller trees, with equally accurate solutions, from randomly selected sample data. We experimented with techniques based on the idea of incremental random sampling combined with genetic algorithms that use global search to evolve decision trees and obtain a compact representation of a large data set. Experiments performed on several data sets showed that the proposed random sampling procedures combined with genetic algorithms build relatively smaller trees than other methods while giving equally accurate solutions. The method thus combines optimization with comprehensibility and scalability. Because the global search explores the space in many directions simultaneously, problems such as slow execution and overloading of memory and processor on very large databases can be avoided.

1.1 The Decision Tree Construction
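The incremental random sampling idea described above can be sketched as follows: train on progressively larger random samples and stop once accuracy on the full data stops improving. This is a minimal illustration, not the paper's actual implementation; the function names, the plateau tolerance, and the toy majority-class learner standing in for a decision tree are all assumptions.

```python
import random

def incremental_sample_train(data, train_fn, eval_fn,
                             start=10, growth=2, tol=0.01):
    """Train on progressively larger random samples of `data` until
    accuracy on the full set plateaus (illustrative sketch)."""
    best_model, best_acc = None, 0.0
    n = start
    while True:
        sample = random.sample(data, min(n, len(data)))
        model = train_fn(sample)
        acc = eval_fn(model, data)
        # stop when a bigger sample no longer buys `tol` more accuracy
        if best_model is not None and acc - best_acc < tol:
            return best_model, best_acc, len(sample)
        best_model, best_acc = model, acc
        if n >= len(data):
            return best_model, best_acc, len(data)
        n *= growth

# Toy stand-in for a decision tree learner: predict the majority class.
def train_majority(instances):
    labels = [y for _, y in instances]
    return max(set(labels), key=labels.count)

def eval_majority(model, instances):
    return sum(y == model for _, y in instances) / len(instances)

random.seed(42)
data = [((i,), 'b' if i % 10 == 0 else 'a') for i in range(300)]
model, acc, used = incremental_sample_train(data, train_majority,
                                            eval_majority)
# a small sample already matches full-data accuracy, so `used` stays small
```

In a real setting `train_fn` would be a decision tree induction algorithm (optionally evolved by a genetic algorithm) and `eval_fn` would measure accuracy on a held-out set; the sampling loop is what keeps the trees small.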
ABSTRACT
Missing data is one of the major issues in data mining and pattern recognition. The knowledge contained in attributes with missing data values is important for improving an organization's decision-making process. The learning process must consider each instance, as any instance may contain some exceptional knowledge. There are various methods to handle missing data in decision tree learning. The proposed imputation algorithm is based on a genetic algorithm that uses the attribute's domain values as the pool of candidate solutions. Survival of the fittest is the basis of the genetic algorithm; the fitness function is the classification accuracy, on the decision tree, of the instance with the imputed value. The global search technique used in genetic algorithms is expected to help find an optimal solution.
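The imputation scheme described above can be sketched for a simple discrete attribute: the population is drawn from the attribute's domain values, and fitness is whether the classifier reproduces the instance's known label once a candidate value is plugged in. Everything here is an illustrative assumption; in particular, `predict` is a hypothetical stand-in for the trained decision tree, not the authors' classifier.

```python
import random

def ga_impute(instance, missing_idx, domain, label, predict,
              pop_size=8, generations=20, mutation_rate=0.2):
    """Genetic-algorithm search over an attribute's domain values (sketch).

    Fitness is 1.0 if the classifier predicts the instance's known
    label once the candidate value is plugged in, else 0.0.
    """
    def fitness(value):
        filled = list(instance)
        filled[missing_idx] = value
        return 1.0 if predict(filled) == label else 0.0

    population = [random.choice(domain) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        if fitness(scored[0]) == 1.0:
            return scored[0]          # perfect candidate found
        # keep the fitter half, refill with mutated or copied survivors
        survivors = scored[:pop_size // 2]
        children = [random.choice(domain) if random.random() < mutation_rate
                    else random.choice(survivors)
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

# Hypothetical trained classifier: predicts 'yes' iff attribute 1 is 'high'.
def predict(x):
    return 'yes' if x[1] == 'high' else 'no'

random.seed(0)
value = ga_impute(instance=['sunny', None, 'weak'], missing_idx=1,
                  domain=['low', 'medium', 'high'], label='yes',
                  predict=predict)
```

With a small discrete domain an exhaustive scan would suffice; the genetic algorithm pays off when attributes are numeric or the candidate pool is large, which is where its global search matters.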
Nowadays the internet has become popular with everyone, yet it is very sensitive to node or link failures caused by many known or unknown issues in network connectivity. Routing is an important concept in wired and wireless networks for packet transmission. During packet transmission, problems often occur that cause packets to be lost or prevent nodes from transmitting packets to their destination. This paper discusses various issues and approaches related to the routing mechanism. We present a review and comparison of different routing algorithms and protocols proposed recently to address these issues. The main purpose of this study is to address packet-forwarding issues such as network control management, load balancing, congestion control, convergence time, and instability. We also focus on the impact of these issues on packet forwarding.
Border Gateway Protocol (BGP), a path-vector routing protocol, is the most widespread exterior gateway protocol (EGP) on the internet. With the extensive deployment of new technologies on the internet, protocols need continuous improvement in their behavior and operations, and new routing technologies must preserve a high level of service availability. Hence, in the face of topological changes, BGP needs to achieve fast network convergence. Network sizes are now growing very rapidly, and to maintain high scalability BGP must avoid instability: instability and failures may push the network into an unstable state, which significantly increases convergence time. This paper summarizes various approaches, such as BGP policies, instability handling, and fault detection, to improve the convergence time of BGP.