Urban planning applications (energy audits, investment, etc.) require an understanding of built infrastructure and its environment, i.e., both low-level physical features (amount of vegetation, building area and geometry, etc.) and higher-level concepts such as land use classes (which encode expert understanding of socioeconomic end uses). This kind of data is expensive and labor-intensive to obtain, which limits its availability (particularly in developing countries). We analyze patterns in land use in urban neighborhoods using large-scale satellite imagery data (which is available worldwide from third-party providers) and state-of-the-art computer vision techniques based on deep convolutional neural networks. For supervision, given the limited availability of standard benchmarks for remote-sensing data, we obtain ground truth land use class labels carefully sampled from open-source surveys, in particular the Urban Atlas land classification dataset of 20 land use classes across 300 European cities. We use this data to train and compare deep architectures that have recently shown good performance on standard computer vision tasks (image classification and segmentation), including on geospatial data. Furthermore, we show that the deep representations extracted from satellite imagery of urban environments can be used to compare neighborhoods across several cities. We make our dataset available for other machine learning researchers to use for remote-sensing applications.
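A minimal sketch of the training setup the abstract describes: fine-tuning an ImageNet-pretrained CNN to classify satellite patches into the 20 Urban Atlas land use classes. Only the class count comes from the abstract; the model choice (ResNet-50), optimizer, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Sketch (assumptions noted above), not the paper's actual code.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 20  # Urban Atlas land use classes (from the abstract)

# Assumed backbone: an ImageNet-pretrained ResNet-50 with a new 20-way head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of labeled satellite patches."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```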
TCP congestion control is fairly inefficient at achieving high throughput in high-speed and dynamic-bandwidth environments. The main culprit is the slow bandwidth-search process used by TCP, which may take up to several thousand round-trip times (RTTs) to search for and acquire the end-to-end spare bandwidth. Even the recently proposed "high-speed" transport protocols may take hundreds of RTTs for this. In this paper, we design a new approach for congestion control that allows TCP connections to boldly search for, and adapt to, the available bandwidth within a single RTT. Our approach relies on carefully orchestrated packet sending times and estimates the available bandwidth based on the delays experienced by these packets. We instantiate our new protocol, referred to as RAPID, using mechanisms that promote efficiency, queue-friendliness, and fairness. Our experimental evaluations on gigabit networks indicate that RAPID: (i) converges to an updated value of bandwidth within 1-4 RTTs; (ii) helps maintain fairly small queues; (iii) has negligible impact on regular TCP traffic; and (iv) exhibits excellent intra-protocol fairness among co-existing RAPID transfers. The rate-based design allows RAPID to be truly RTT-fair.
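A sketch of the delay-based estimation idea the abstract alludes to, not the RAPID implementation: if a probe train is sent with each packet probing a successively higher rate, packets sent above the spare bandwidth accumulate queuing delay, so the estimate is the highest probed rate whose relative delay did not grow. All names and the threshold here are hypothetical.

```python
# Sketch of probe-train available-bandwidth estimation (an assumption,
# not RAPID's actual algorithm).
def estimate_bandwidth(send_times, recv_times, rates, eps=1e-4):
    """send_times/recv_times: per-packet timestamps in seconds;
    rates[i]: the instantaneous sending rate (bps) probed by packet i."""
    delays = [r - s for s, r in zip(send_times, recv_times)]
    estimate = rates[0]
    for i in range(1, len(delays)):
        if delays[i] - delays[i - 1] > eps:  # queuing delay is building up:
            break                            # this rate exceeded spare capacity
        estimate = rates[i]
    return estimate
```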
This paper develops and evaluates new share-based scheduling algorithms for differentiated service quality in network services, such as network storage servers. This form of resource control makes it possible to share a server among multiple request flows with probabilistic assurance that each flow receives a specified minimum share of the server's capacity to serve requests. This assurance is important for safe outsourcing of services to shared utilities such as Storage Service Providers. Our approach interposes share-based request dispatching on the network path between the server and its clients. Two new scheduling algorithms are designed to run within an intermediary (e.g., a network switch), where they enforce fair sharing by throttling request flows and reordering requests; these algorithms are adaptations of Start-time Fair Queuing (SFQ) for servers with a configurable degree of internal concurrency. A third algorithm, Request Windows (RW), bounds the outstanding requests for each flow independently; it is amenable to a decentralized implementation, but may restrict concurrency under light load. The analysis and experimental results show that these new algorithms can enforce shares effectively when the shares are not saturated, and that they provide acceptable performance isolation under saturation. Although the evaluation uses a storage service as an example, interposed request scheduling is non-intrusive and views the server as a black box, so it is useful for complex services with no internal support for differentiated service quality.
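For context, a minimal sketch of classical Start-time Fair Queuing tags, the mechanism the interposed schedulers adapt: each request gets a start tag and a finish tag, requests dispatch in start-tag order, and a flow's throughput is proportional to its configured weight. This is the textbook SFQ formulation, assumed here; the paper's variants extend it for a configurable degree of server concurrency.

```python
# Sketch of classical SFQ tag computation (assumed background, not the
# paper's adapted algorithms).
import heapq

class SFQScheduler:
    def __init__(self, weights):
        self.weights = weights                  # flow id -> share (weight)
        self.finish = {f: 0.0 for f in weights} # last finish tag per flow
        self.vtime = 0.0                        # virtual time
        self.queue = []                         # (start_tag, seq, flow, cost)
        self.seq = 0                            # tie-breaker for equal tags

    def enqueue(self, flow, cost):
        # Start tag: max of current virtual time and the flow's last finish tag.
        start = max(self.vtime, self.finish[flow])
        self.finish[flow] = start + cost / self.weights[flow]
        heapq.heappush(self.queue, (start, self.seq, flow, cost))
        self.seq += 1

    def dispatch(self):
        # Serve in increasing start-tag order; advance virtual time.
        start, _, flow, cost = heapq.heappop(self.queue)
        self.vtime = start
        return flow, cost
```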
Website fingerprinting based on TCP/IP headers is of significant relevance to several Internet entities. Prior work has focused only on a limited set of features and does not help in understanding the extent of fingerprintability. We address this by conducting an exhaustive feature analysis within eight different communication scenarios. Our analysis reveals several previously unknown features, across several scenarios, that can be used to fingerprint websites with much higher accuracy than previously demonstrated. This work helps the community better understand the extent of learnability (and vulnerability) from TCP/IP headers.
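An illustrative sketch of the kind of header-only features such an analysis works with: per-direction packet and byte counts and size statistics, with no payload inspection. The specific feature set below is an assumption for illustration; the paper evaluates a far more exhaustive one.

```python
# Sketch of header-derived fingerprinting features (illustrative assumption,
# not the paper's feature set).
import statistics

def header_features(trace):
    """trace: list of (direction, size) tuples, direction +1 = client-to-server."""
    out = [s for d, s in trace if d > 0]
    inc = [s for d, s in trace if d < 0]
    return {
        "pkts_out": len(out),
        "pkts_in": len(inc),
        "bytes_out": sum(out),
        "bytes_in": sum(inc),
        "mean_size_in": statistics.mean(inc) if inc else 0.0,
    }
```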