Abstract: The pervasive deployment of 802.11 in modern enterprise buildings has long made it an attractive technology for constructing indoor location services. To this end, a broad range of algorithms have been proposed to accurately estimate location from 802.11 signal strength measurements, some without requiring manual calibration for each physical location. Prior work suggests that many of these protocols can be highly effective, reporting median errors of under 2 meters in some instances. However, there are few studies validating these claims at scale or comparing the algorithms in a uniform, realistic environment. Our work provides precisely this kind of empirical evaluation in a realistic office building environment. Surprisingly, we find that median errors in our environment are consistently greater than 5 meters and, counter-intuitively, that simpler algorithms frequently outperform their more sophisticated counterparts. In analyzing our results, we argue that unrealistic assumptions about access point densities and underlying variability in the indoor environment may preclude highly accurate location estimates based on 802.11 signal strength.
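To make the "simpler algorithms" end of the comparison concrete, the sketch below implements a nearest-neighbor RSSI fingerprint matcher, a common baseline for 802.11 signal-strength localization. The data layout (dicts keyed by AP identifier) and all names are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of nearest-neighbor RSSI fingerprinting (assumed layout:
# fingerprints and samples are dicts of {ap_id: dBm}).

import math

def rssi_distance(sample, fingerprint, missing_penalty=-100):
    """Euclidean distance in signal-strength space; APs missing from either
    side are treated as a weak default reading (an assumption)."""
    aps = set(sample) | set(fingerprint)
    return math.sqrt(sum(
        (sample.get(ap, missing_penalty) - fingerprint.get(ap, missing_penalty)) ** 2
        for ap in aps))

def estimate_location(sample, radio_map):
    """Return the (x, y) calibration point whose stored fingerprint is
    closest to the observed sample. radio_map: {(x, y): {ap_id: dBm}}."""
    return min(radio_map, key=lambda loc: rssi_distance(sample, radio_map[loc]))

# Toy usage with made-up calibration data (dBm values are illustrative).
radio_map = {
    (0.0, 0.0): {"ap1": -40, "ap2": -70},
    (5.0, 0.0): {"ap1": -65, "ap2": -45},
}
print(estimate_location({"ap1": -50, "ap2": -60}, radio_map))  # -> (0.0, 0.0)
```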
Of the major factors affecting end-to-end service availability, network component failure is perhaps the least well understood. How often do failures occur, how long do they last, what are their causes, and how do they impact customers? Traditionally, answering questions such as these has required dedicated (and often expensive) instrumentation broadly deployed across a network. We propose an alternative approach: opportunistically mining "low-quality" data sources that are already available in modern network environments. We describe a methodology for recreating a succinct history of failure events in an IP network using a combination of structured data (router configurations and syslogs) and semi-structured data (email logs). Using this technique we analyze over five years of failure events in a large regional network consisting of over 200 routers; to our knowledge, this is the largest study of its kind.
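As a rough illustration of what reconstructing a failure history from such "low-quality" sources can look like, the sketch below pairs syslog-style link-down and link-up messages into failure intervals. The event format and field names are assumptions for illustration only, not the paper's actual pipeline.

```python
# Illustrative sketch: stitch syslog-style DOWN/UP events into failure intervals.

def stitch_failures(events):
    """events: iterable of (timestamp, link_id, state) with state in
    {"DOWN", "UP"}, assumed sorted by timestamp. Returns a list of
    (link_id, down_time, up_time) failure intervals."""
    open_failures = {}   # link_id -> timestamp of the unmatched DOWN
    intervals = []
    for ts, link, state in events:
        if state == "DOWN" and link not in open_failures:
            open_failures[link] = ts
        elif state == "UP" and link in open_failures:
            intervals.append((link, open_failures.pop(link), ts))
    return intervals

# Toy usage with fabricated timestamps and interface names.
events = [
    (100, "r1:ge-0/0/1", "DOWN"),
    (160, "r1:ge-0/0/1", "UP"),
    (200, "r2:xe-1/0/0", "DOWN"),
    (260, "r2:xe-1/0/0", "UP"),
]
print(stitch_failures(events))  # [('r1:ge-0/0/1', 100, 160), ('r2:xe-1/0/0', 200, 260)]
```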
Accurate reporting and analysis of network failures has historically required instrumentation (e.g., dedicated tracing of routing protocol state) that is rarely available in practice. In previous work, our group has proposed that a combination of common data sources could be substituted instead. In particular, by opportunistically stitching together data from router configuration logs and syslog messages, we demonstrated that a granular picture of network failures could be resolved and verified against human trouble tickets. In this paper, we more fully evaluate the fidelity of this approach by comparing it with high-quality "ground truth" data derived from an analysis of contemporaneous IS-IS routing protocol messages. We identify areas of agreement and disparity between these data sources, as well as potential ways to correct disparities where possible.
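One simple way to quantify agreement and disparity between two such event timelines is to match failure intervals that overlap within a small time tolerance. The sketch below is purely illustrative of that idea; the matching criterion and tolerance are assumptions, not the methodology used in the paper.

```python
# Illustrative comparison of two failure-interval timelines: one derived from
# syslog, one from IS-IS messages. Matching-by-overlap is an assumption here.

def overlaps(a, b, slack=30):
    """True if intervals a=(start, end) and b=(start, end) overlap once each
    is widened by `slack` seconds to absorb timestamping differences."""
    return a[0] - slack <= b[1] and b[0] - slack <= a[1]

def compare_timelines(syslog_intervals, isis_intervals, slack=30):
    matched = [s for s in syslog_intervals
               if any(overlaps(s, g, slack) for g in isis_intervals)]
    missed = [g for g in isis_intervals
              if not any(overlaps(g, s, slack) for s in syslog_intervals)]
    return matched, missed

# Toy usage with fabricated intervals (seconds).
syslog = [(100, 160), (500, 520)]
isis = [(105, 158), (300, 320)]
matched, missed = compare_timelines(syslog, isis)
print(len(matched), "agree;", len(missed), "seen only in IS-IS")  # 1 agree; 1 seen only in IS-IS
```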
Driven by the soaring demands for always-on and fast-response online services, modern datacenter networks have recently undergone tremendous growth. These networks often rely on commodity hardware to reach immense scale while keeping capital expenses in check. The downside is that commodity devices are prone to failures, raising a formidable challenge for network operators to promptly handle these failures with minimal disruptions to the hosted services. Recent research efforts have focused on automatic failure localization. Yet, resolving failures still requires significant human intervention, resulting in prolonged failure recovery time. Unlike previous work, NetPilot aims to quickly mitigate rather than resolve failures. NetPilot mitigates failures in much the same way operators do: by deactivating or restarting suspected offending components. NetPilot circumvents the need for knowing the exact root cause of a failure by taking an intelligent trial-and-error approach. The core of NetPilot comprises an Impact Estimator that helps guard against overly disruptive mitigation actions and a failure-specific mitigation planner that minimizes the number of trials. We demonstrate that NetPilot can effectively mitigate several types of critical failures commonly encountered in production datacenter networks.
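The trial-and-error approach described above can be read as a control loop: propose candidate actions against suspect components, discard those the impact estimate deems too disruptive, and try the rest until the failure clears. The sketch below is a hypothetical rendering of such a loop; the callback names (estimate_impact, apply_action, failure_resolved, rollback) stand in for components the abstract only describes at a high level and are not NetPilot's actual interfaces.

```python
# Hypothetical sketch of a NetPilot-style mitigation loop (all names assumed).

def mitigate(suspect_components, estimate_impact, apply_action,
             failure_resolved, rollback, impact_budget):
    """Try deactivating suspects one at a time, skipping any action whose
    estimated impact exceeds the budget, until the failure clears."""
    for component in suspect_components:          # assumed ordered by a planner
        action = ("deactivate", component)
        if estimate_impact(action) > impact_budget:
            continue                              # guard against disruptive actions
        apply_action(action)
        if failure_resolved():
            return action                         # mitigation succeeded
        rollback(action)                          # undo and try the next suspect
    return None                                   # no safe mitigation found

# Toy usage with stub callbacks.
result = mitigate(
    ["switch-A", "switch-B"],
    estimate_impact=lambda action: 0.1,
    apply_action=lambda action: None,
    failure_resolved=lambda: True,
    rollback=lambda action: None,
    impact_budget=0.5,
)
print(result)  # ('deactivate', 'switch-A')
```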