An Internet worm automatically replicates itself to vulnerable systems and may infect hundreds of thousands of servers across the Internet. It is conceivable that cyber-terrorists could use a widespread worm to cause major disruption to the Internet economy. While much recent research concentrates on propagation models, defense against worms remains largely an open problem. We propose a distributed anti-worm architecture (DAW) that automatically slows down or even halts worm propagation. New defense techniques are developed based on behavioral differences between normal hosts and worm-infected hosts. In particular, a worm-infected host has a much higher connection-failure rate when it scans the Internet with randomly selected addresses. This property allows DAW to set worms apart from normal hosts. We propose a temporal rate-limit algorithm and a spatial rate-limit algorithm, which make the speed of worm propagation configurable through the parameters of the defense system. DAW is designed for an Internet service provider to offer the anti-worm service to its customers. The effectiveness of the new techniques is evaluated analytically and by simulations.
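The following is a minimal sketch of the behavioral idea behind such a defense, assuming a per-host monitor that flags sources with an abnormally high connection-failure rate and then applies a temporal rate limit to them. The parameter names (FAILURE_RATE_THRESHOLD, MIN_ATTEMPTS, MAX_CONNECTIONS_PER_SEC) are hypothetical, and the code is an illustrative reading of the abstract rather than the DAW algorithm itself.

```python
# Sketch: failure-rate-based detection plus temporal rate limiting.
# All thresholds are hypothetical illustration values.
import time
from collections import defaultdict

FAILURE_RATE_THRESHOLD = 0.5     # fraction of failed connection attempts
MIN_ATTEMPTS = 20                # ignore hosts with too few attempts
MAX_CONNECTIONS_PER_SEC = 1.0    # allowed new-connection rate once a host is flagged

class HostMonitor:
    def __init__(self):
        self.attempts = defaultdict(int)      # host -> connection attempts observed
        self.failures = defaultdict(int)      # host -> failed attempts (e.g., reset/timeout)
        self.last_allowed = defaultdict(float)

    def record(self, host, failed):
        """Record one outbound connection attempt by `host`."""
        self.attempts[host] += 1
        if failed:
            self.failures[host] += 1

    def is_suspect(self, host):
        """A host scanning random addresses fails far more often than a normal host."""
        a = self.attempts[host]
        return a >= MIN_ATTEMPTS and self.failures[host] / a > FAILURE_RATE_THRESHOLD

    def allow(self, host, now=None):
        """Temporal rate limit: a flagged host may open at most
        MAX_CONNECTIONS_PER_SEC new connections per second."""
        now = time.time() if now is None else now
        if not self.is_suspect(host):
            return True
        if now - self.last_allowed[host] >= 1.0 / MAX_CONNECTIONS_PER_SEC:
            self.last_allowed[host] = now
            return True
        return False
```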
The spread of a source host is the number of distinct destinations to which it has sent packets during a measurement period. A spread estimator is a software/hardware module on a router that inspects arriving packets and estimates the spread of each source. It has important applications in detecting port scans and DDoS attacks, measuring the infection rate of a worm, assisting resource allocation in a server farm, and determining popular web content for caching, to name a few. The main technical challenge is to fit a spread estimator into a fast but small memory (such as SRAM) so that it can operate at line speed in a high-speed network. In this paper, we design a new spread estimator that delivers good performance in tight memory space where all existing estimators no longer work. The new estimator not only achieves space compactness but also operates more efficiently than the existing ones. Its accuracy and efficiency come from a new method of data storage, called virtual vectors, which allows us to measure and remove the errors in spread estimation. We perform experiments on real Internet traces to verify the effectiveness of the new estimator.
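Below is a minimal sketch of the virtual-vector idea, assuming a design in which each source's small virtual bit vector is drawn by hashing from one shared physical bit array, and spread is estimated by linear (probabilistic) counting with the whole-array zero fraction used to cancel the noise introduced by bit sharing. The sizes, hash construction, and the exact estimation formula are assumptions for illustration, not the paper's published algorithm.

```python
# Sketch: virtual bit vectors carved out of a shared physical bitmap.
import math
import hashlib

PHYS_BITS = 1 << 20   # size of the shared physical bit array (assumption)
VIRT_BITS = 128       # size of each source's virtual vector (assumption)

bitmap = bytearray(PHYS_BITS // 8)

def _h(*parts):
    digest = hashlib.sha1("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(digest[:8], "big")

def _phys_index(src, j):
    # The j-th bit of src's virtual vector maps to a pseudo-random physical bit.
    return _h(src, j) % PHYS_BITS

def _bit(idx):
    return (bitmap[idx // 8] >> (idx % 8)) & 1

def record(src, dst):
    """Record one (source, destination) contact."""
    j = _h(dst) % VIRT_BITS              # position inside src's virtual vector
    idx = _phys_index(src, j)
    bitmap[idx // 8] |= 1 << (idx % 8)

def estimate_spread(src):
    """Linear counting on the virtual vector, with the global zero fraction
    subtracting the noise caused by other sources sharing the same bits."""
    zeros_s = sum(1 for j in range(VIRT_BITS) if not _bit(_phys_index(src, j)))
    v_s = zeros_s / VIRT_BITS
    zeros_g = sum(8 - bin(b).count("1") for b in bitmap)
    v_g = zeros_g / PHYS_BITS
    if v_s == 0:                         # virtual vector saturated
        return float("inf")
    return max(0.0, VIRT_BITS * (math.log(v_g) - math.log(v_s)))
```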
A wireless sensor network is constrained by computation capability, memory space, communication bandwidth, and above all, energy supply. When a critical event triggers a surge of data generated by the sensors, congestion may occur as data packets converge toward a sink. Congestion causes energy waste, throughput reduction, and information loss. Yet the important problem of congestion avoidance in sensor networks remains largely open. This paper proposes a congestion-avoidance scheme based on lightweight buffer management. We describe simple yet effective approaches that prevent data packets from overflowing the buffer space of the intermediate sensors. These approaches automatically adapt the sensors' forwarding rates to nearly optimal without causing congestion. We discuss how to implement buffer-based congestion avoidance with different MAC protocols. In particular, for CSMA with implicit ACK, our 1/k-buffer solution prevents hidden terminals from causing congestion. We demonstrate how to maintain near-optimal throughput with a small buffer at each sensor and how to achieve congestion-free load balancing when there are multiple routing paths toward multiple sinks.
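The sketch below illustrates hop-by-hop, buffer-based forwarding control under one plausible reading of the abstract: every sensor piggybacks its free buffer size on outgoing packets, and a sender transmits only when its downstream node has room; when k upstream senders may be hidden from one another, each conservatively claims only a 1/k share of the advertised free space. The class, fields, and the 1/k interpretation are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch: buffer-based congestion avoidance on a forwarding tree.
class Sensor:
    def __init__(self, node_id, buffer_size, num_upstream=1):
        self.node_id = node_id
        self.buffer = []                    # queued packets awaiting forwarding
        self.buffer_size = buffer_size
        self.num_upstream = num_upstream    # k potential (possibly hidden) upstream senders
        self.downstream_free = 0            # last advertised free space of the next hop

    def advertised_free(self):
        """Free buffer slots, piggybacked on every outgoing packet."""
        return self.buffer_size - len(self.buffer)

    def hear_advertisement(self, free_slots):
        """Overheard or received advertisement from the downstream node."""
        self.downstream_free = free_slots

    def may_send(self):
        # With k hidden-terminal senders sharing the downstream buffer, each sender
        # conservatively assumes only a 1/k share of the advertised free space.
        return bool(self.buffer) and self.downstream_free // self.num_upstream >= 1

    def enqueue(self, packet):
        if len(self.buffer) < self.buffer_size:
            self.buffer.append(packet)
            return True
        return False                        # overflow never happens if senders obey may_send()

    def send_one(self, downstream):
        """Forward the head-of-line packet if the downstream node has room."""
        if self.may_send() and downstream.enqueue(self.buffer[0]):
            self.buffer.pop(0)
            self.downstream_free -= 1
```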
Maximizing the operational lifetime of a sensor network is a critical problem in practice. Many prior works define the network's lifetime as the time before the first sensor in the network runs out of energy. However, when one sensor dies, the rest of the network can still work, as long as useful data generated by other sensors can reach the sink. More appropriately, we should maximize the lifetime vector of the network, consisting of the lifetimes of all sensors, sorted in ascending order. For this problem, there exists only a centralized algorithm that solves a series of linear programming problems with high-order complexity. This paper proposes a fully distributed progressive algorithm that iteratively produces a series of lifetime vectors, each better than the previous one. Instead of giving the optimal result in one shot after a lengthy computation, the proposed distributed algorithm has a result available at any time, and the more time spent, the better the result. We show that when the algorithm stabilizes, its result is the maximum lifetime vector. Furthermore, simulations demonstrate that the algorithm converges rapidly towards the maximum lifetime vector with low overhead.
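The following is a minimal sketch of the progressive idea: each sensor repeatedly shifts a small fraction of its outgoing traffic away from the next-hop neighbor with the shortest remaining lifetime toward the one with the longest, so every iteration yields a lifetime vector no worse than the previous one. The STEP parameter, data layout, and the local-balancing rule are illustrative assumptions; this is not the paper's distributed algorithm.

```python
# Sketch: one local iteration of a progressive lifetime-balancing heuristic.
STEP = 0.05   # fraction of traffic moved per iteration (hypothetical parameter)

def lifetime(energy, load, cost_per_unit=1.0):
    """Remaining lifetime of a node given its residual energy and relayed load."""
    return float("inf") if load == 0 else energy / (load * cost_per_unit)

def progressive_step(node, neighbors):
    """One local iteration at `node`.
    `node['shares']` maps each next-hop neighbor id to the fraction of node's
    traffic routed through it; `neighbors` maps neighbor id to
    {'energy': residual energy, 'load': current relayed load}."""
    lifetimes = {n: lifetime(info['energy'], info['load'])
                 for n, info in neighbors.items()}
    worst = min(lifetimes, key=lifetimes.get)
    best = max(lifetimes, key=lifetimes.get)
    if lifetimes[best] <= lifetimes[worst]:
        return False                       # already balanced locally
    moved = min(STEP, node['shares'][worst])
    node['shares'][worst] -= moved         # relieve the bottleneck neighbor
    node['shares'][best] += moved          # shift load to the longest-lived neighbor
    return moved > 0
```

Running progressive_step at every node in rounds produces, round by round, a sequence of lifetime vectors that can be read off at any time, mirroring the any-time property described above.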