Abstract: We consider the problem of estimating the measure of subsets in very large networks. A prime tool for this purpose is the Markov Chain Monte Carlo (MCMC) algorithm. This algorithm, while extremely useful in many cases, still often suffers from the drawback of very slow convergence. We show that in a special, but important case, it is possible to obtain significantly better bounds on the convergence rate. This special case is when the huge state space can be aggregated into a smaller number of clusters, in which the states behave approximately the same way (but their behavior still may not be identical). A Markov chain with this structure is called quasi-lumpable. This property allows the aggregation of states (nodes) into clusters. Our main contribution is a rigorously proved bound on the rate at which the aggregated state distribution approaches its limit in quasi-lumpable Markov chains. We also demonstrate numerically that in certain cases this can indeed lead to a significantly accelerated way of estimating the measure of subsets. The result can be a useful tool in the analysis of complex networks, whenever they have a clustering that aggregates nodes with similar (but not necessarily identical) behavior.
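To illustrate the quasi-lumpability idea, the following is a minimal sketch (not the paper's algorithm or bound): a small 4-state chain whose states fall into two clusters with nearly, but not exactly, identical transition behavior. The transition matrix and cluster assignment are invented for illustration; we iterate the distribution and read off the aggregated (cluster-level) probabilities.

```python
# Illustrative (hypothetical) quasi-lumpable chain: states {0,1} form
# cluster A and {2,3} form cluster B. Rows within a cluster are close
# but not identical, so the chain is quasi-lumpable, not lumpable.
P = [
    [0.39, 0.31, 0.15, 0.15],  # cluster A, state 0 (P(A -> B) = 0.30)
    [0.32, 0.40, 0.14, 0.14],  # cluster A, state 1 (P(A -> B) = 0.28)
    [0.20, 0.20, 0.29, 0.31],  # cluster B, state 2 (P(B -> A) = 0.40)
    [0.20, 0.20, 0.31, 0.29],  # cluster B, state 3 (P(B -> A) = 0.40)
]
clusters = [(0, 1), (2, 3)]

def step(dist, P):
    """One step of the chain: dist' = dist * P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def aggregate(dist, clusters):
    """Collapse a full state distribution to cluster-level probabilities."""
    return [sum(dist[i] for i in c) for c in clusters]

dist = [1.0, 0.0, 0.0, 0.0]  # start deterministically in state 0
for _ in range(50):
    dist = step(dist, P)

agg = aggregate(dist, clusters)
print(agg)  # cluster-level distribution, close to its limit
```

On real networks the state space is huge and one would track only the aggregated distribution; the sketch just makes the aggregation step concrete.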
We propose a network topology design approach that targets the reduction of structural congestion in a directed acyclic network. By structural congestion we mean that a node has a much higher in-degree than out-degree in a directed network. We approach the issue using a network design game model. In this model we consider multiple sources and one destination. Each node is willing to connect to other nodes, but it must pay the price of the whole paths it uses to send traffic to the destination. The model yields a weight for each link. We show that if these weights are used to compute shortest paths, then a network topology is obtained with a low level of structural congestion. The proposed method has two phases. In Phase I, we solve a linear optimization problem in order to find the optimum link weights. In Phase II, each node optimizes its own individual objective function, which is based on the weights computed in Phase I. We show that there exists a Nash Equilibrium which is also the global optimum. In order to measure the penalty incurred by the selfish behavior of nodes, we use the concept called price of anarchy. Our results show that the price of anarchy is zero.
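A minimal sketch of the Phase II step described above, under assumed inputs: the link weights are hard-coded stand-ins for the Phase I LP output, and the small topology (sources `a`, `b`, destination `d`) is invented for illustration. Each source routes to the destination along a shortest path, and structural congestion is then measured as the largest in-degree minus out-degree over the links actually used.

```python
import heapq

# Hypothetical link weights, standing in for the Phase I LP output.
weights = {
    ('a', 'c'): 1.0, ('a', 'b'): 1.5,
    ('b', 'c'): 1.0, ('b', 'd'): 2.5,
    ('c', 'd'): 1.0,
}

def shortest_path(src, dst, weights):
    """Dijkstra's algorithm over the weighted directed graph."""
    adj = {}
    for (u, v), w in weights.items():
        adj.setdefault(u, []).append((v, w))
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Collect the links used by the sources' shortest paths to 'd'.
used = set()
for s in ('a', 'b'):
    p = shortest_path(s, 'd', weights)
    used.update(zip(p, p[1:]))

# Structural congestion: max over nodes of (in-degree - out-degree).
indeg, outdeg = {}, {}
for u, v in used:
    outdeg[u] = outdeg.get(u, 0) + 1
    indeg[v] = indeg.get(v, 0) + 1
congestion = max(indeg.get(n, 0) - outdeg.get(n, 0)
                 for n in set(indeg) | set(outdeg))
print(congestion)
```

The point of the two-phase design is that if the Phase I weights are chosen well, this per-node shortest-path behavior keeps the resulting congestion value low.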
Traffic engineering (TE) helps to use network resources more efficiently. Network operators use TE to achieve objectives such as load balancing, congestion avoidance, and average delay reduction. Plain IP routing protocols such as OSPF, a popular intradomain routing protocol, are believed to be insufficient for TE. OSPF is based on the shortest path algorithm, in which link weights are usually static values set without regard to network load; they can be set inversely proportional to bandwidth capacity or to a fixed value. Optimization theory helps network researchers and operators analyze network behavior more precisely, but it is not a practical approach that can be implemented in a traditional protocol. This paper proposes that, to address the feasibility requirements, a weight set extracted from an optimization problem be used as the link metric in OSPF. We show that the routes selected by OSPF with these metrics distribute the traffic closer to the optimal situation than the routes from OSPF with the default metric.
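As a concrete illustration of the default-metric convention the abstract contrasts against, the snippet below computes inverse-capacity OSPF weights in the common Cisco style (reference bandwidth divided by link bandwidth). The reference bandwidth and link capacities are assumed example values, not taken from the paper; an optimized weight set would replace these per-link numbers.

```python
# Default OSPF cost convention (Cisco style): cost = reference_bandwidth
# / link_bandwidth, truncated to an integer, with a minimum of 1.
REF_BW = 100_000  # reference bandwidth in kbps (assumed example value)

# Assumed link capacities in kbps: Fast Ethernet, 10 Mbps Ethernet, T1.
links_kbps = {'A-B': 100_000, 'B-C': 10_000, 'A-C': 1_544}

default_weights = {l: max(1, REF_BW // bw) for l, bw in links_kbps.items()}
print(default_weights)  # {'A-B': 1, 'B-C': 10, 'A-C': 64}
```

Because these weights depend only on capacity, not on offered load, the shortest paths they induce can concentrate traffic; the paper's approach substitutes weights derived from an optimization problem while keeping plain OSPF forwarding.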
The rapidly emerging area of Social Network Analysis is typically based on graph models. They include directed/undirected graphs, as well as a multitude of random graph representations that reflect the inherent randomness of social networks. A large number of parameters and metrics are derived from these graphs. Overall, this gives rise to two fundamental research/development directions: (1) advancements in models and algorithms, and (2) implementing the algorithms for huge real-life systems. The model and algorithm development part deals with finding the right graph models for various applications, along with algorithms to treat the associated tasks, as well as computing the appropriate parameters and metrics. In this chapter we would like to focus on the second area: on implementing the algorithms for very large graphs. The approach is based on the Spark framework and the GraphX API, which run on top of the Hadoop Distributed File System.