Video-based sensor networks can provide important visual information in a number of applications, including environmental monitoring, health care, emergency response, and video security. This article describes the Panoptes video-based sensor networking architecture, including its design, implementation, and performance. We describe two video sensor platforms that can deliver high-quality video over 802.11 networks with a power requirement of less than 5 watts. In addition, we describe the streaming and prioritization mechanisms that we have designed to allow the sensor to survive long periods of disconnected operation. Finally, we describe a sample application and bitmapping algorithm that we have implemented to show the usefulness of our platform. Our experiments include an in-depth analysis of the bottlenecks within the system as well as power measurements for the various components of the system.
Abstract—With the current default settings of the OSPF parameters, the network takes several tens of seconds to recover from a failure. The main component of this delay is the time required to detect the failure using the Hello protocol. Failure detection can be sped up by reducing the value of HelloInterval. However, too small a HelloInterval increases the chance that network congestion will cause the loss of several consecutive Hellos, leading to a false breakdown of adjacency between routers. Such false alarms not only disrupt network traffic by causing unnecessary routing changes but also increase the processing load on the routers, which may potentially lead to routing instability. In this paper, we investigate the following question: what is the optimal value of HelloInterval that leads to fast failure detection in the network while keeping false alarm occurrences within acceptable limits? We examine the impact of both network congestion and network topology on the optimal HelloInterval value. Additionally, we investigate the effectiveness of faster failure detection in achieving faster failure recovery in OSPF networks.
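The detection-speed versus false-alarm trade-off described in this abstract can be sketched numerically. This is an illustrative model, not the paper's analysis: it assumes the common OSPF convention that RouterDeadInterval is four times HelloInterval, and it treats Hello losses as independent (a simplification, since congestion losses are typically bursty).

```python
# Illustrative sketch of the OSPF HelloInterval trade-off.
# Assumptions (not from the paper): RouterDeadInterval = 4 * HelloInterval,
# and independent per-Hello loss probability.

def detection_time(hello_interval, multiplier=4):
    """Worst-case failure detection time: the dead interval, in seconds."""
    return multiplier * hello_interval

def false_alarm_prob(loss_prob, multiplier=4):
    """Probability that `multiplier` consecutive Hellos are all lost,
    falsely tearing down the adjacency (independence assumed)."""
    return loss_prob ** multiplier

# The default HelloInterval of 10 s gives a 40 s detection time;
# shrinking it detects failures faster, but at a fixed per-Hello loss
# probability the false-alarm rate per unit time rises because far
# more dead intervals elapse per second.
print(detection_time(10))            # → 40
print(detection_time(1))             # → 4
print(false_alarm_prob(0.1))         # → 0.00010000000000000002
```

The model makes the abstract's tension concrete: shrinking HelloInterval buys detection speed linearly, while congestion-driven loss bursts make the independence assumption optimistic, which is why the paper studies congestion and topology effects empirically.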
The transportation of compressed video data without loss of picture quality requires the network to support large fluctuations in bandwidth requirements. Fully utilizing a client-side buffer to smooth bandwidth requirements can limit the fluctuations in bandwidth required from the underlying network. This paper shows that, for a fixed-size buffer constraint, the critical bandwidth allocation technique results in the minimum number of bandwidth increases necessary for stored video playback. In addition, this paper introduces an optimal bandwidth allocation algorithm that minimizes the number of bandwidth changes necessary for the playback of stored video and achieves the maximum effectiveness from client-side buffers. For bandwidth allocation plans that allocate bandwidth for the length of a video, the optimal bandwidth allocation algorithm also minimizes the number of bandwidth increases as well as the total increase in bandwidth. A comparison between the optimal bandwidth allocation algorithm and other critical-bandwidth-based algorithms using several full-length movie videos is also presented.

Keywords: video-on-demand, bandwidth allocation, MPEG, video compression, smoothing

Introduction

Video applications, such as video-on-demand services, rely on both high-speed networking and data compression. Data compression can introduce burstiness into video data streams, which complicates the problem of network resource management. For live-video applications, the problem of video delivery is constrained by the requirements that decisions be made on-line and that the delay between sender and receiver be minimized. As a result, live-video applications may have to settle for weaker guarantees of service or for some degradation in quality of service.
Work on problems raised by the requirements of live video includes work on statistical multiplexing [2,7], smoothing in exchange for delay [4], jitter control [9,11], and adjusting the quality of service to fit the resources available [6]. Stored video applications, on the other hand, can take a more flexible approach to the latency of data delivery. In particular, they can make use of buffering to smooth the burstiness introduced by data compression. Because the entire video stream is known a priori, it is possible to calculate a complete plan for the delivery of the video data that avoids both the loss of picture quality and the loss of network bandwidth due to overstatement of bandwidth requirements.
The transfer of prerecorded, compressed variable-bit-rate video requires multimedia services to support large fluctuations in bandwidth requirements on multiple time scales. Bandwidth smoothing techniques can reduce the burstiness of a variable-bit-rate stream by transmitting data at a series of fixed rates, simplifying the allocation of resources in video servers and the communication network. This paper compares the transmission schedules generated by the various smoothing algorithms, based on a collection of metrics that relate directly to the server, network, and client resources necessary for the transmission, transport, and playback of prerecorded video. Using MPEG-1 and MJPEG video data and a range of client buffer sizes, we investigate the interplay between the performance metrics and the smoothing algorithms. The results highlight the unique strengths and weaknesses of each bandwidth smoothing algorithm, as well as the characteristics of a diverse set of video clips.
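The smoothing abstracts above all rest on the same feasibility constraint: cumulative arrivals at the client must never fall behind playback consumption (underflow) and never exceed consumption plus the client buffer (overflow). The sketch below is a minimal illustrative checker for that "buffer corridor", with invented frame sizes; it is not any of the papers' smoothing algorithms, which choose piecewise-constant rates within this corridor.

```python
# Minimal sketch of the client-buffer feasibility corridor that
# bandwidth-smoothing schedules must stay inside. Names and data are
# illustrative, not taken from the papers.

def feasible(frame_sizes, send, buffer_size):
    """Each period the client consumes frame_sizes[k] bytes and the
    server sends send[k] bytes. A schedule is feasible iff cumulative
    arrivals S never overflow the buffer (S <= consumed + buffer_size)
    and never underflow playback (S >= consumed)."""
    consumed = sent = 0
    for need, s in zip(frame_sizes, send):
        sent += s                            # data arrives first...
        if sent > consumed + buffer_size:    # buffer overflow
            return False
        consumed += need                     # ...then the frame plays
        if sent < consumed:                  # playback underflow
            return False
    return True

frames = [30, 10, 50, 20, 40]     # bursty VBR frame sizes (bytes/period)
avg = sum(frames) // len(frames)  # 30 bytes/period, a constant-rate plan
print(feasible(frames, [avg] * 5, buffer_size=60))  # → True
print(feasible(frames, [avg] * 5, buffer_size=40))  # → False
```

With a 60-byte buffer the single constant rate covers the whole clip; with only 40 bytes the rate must change, which is exactly the kind of trade-off (buffer size versus number and magnitude of rate changes) that the smoothing algorithms optimize.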
Abstract—When computer intrusions occur, one of the most costly, time-consuming, and human-intensive tasks is the analysis and recovery of the compromised system. At a time when the cost of human resources dominates the cost of CPU, network, and storage resources, we argue that computing systems should, in fact, be built with automated analysis and recovery as a primary goal. Towards this end, we describe the design, implementation, and evaluation of Forensix: a robust, high-precision reconstruction and analysis system for supporting the computer equivalent of "TiVo". The Forensix system records all activity of a target computer and allows for efficient, automated reconstruction of the activity when needed by investigators. Such a system could eventually be used by law enforcement officials to provide evidence in criminal cases as well as by companies to prove or disprove alleged hacking activity.

Forensix uses three key mechanisms to improve the accuracy and reduce the human overhead of performing forensic analysis. First, it performs comprehensive monitoring of the execution of a target system at the kernel event level, giving a high-resolution, application-independent view of all activity. Second, it streams the kernel event information, in real time, to append-only storage on a separate, hardened logging machine, making the system resilient to a wide variety of attacks. Third, it uses database technology to support high-level querying of the archived log, greatly reducing the human cost of performing forensic analysis. Forensix is built on top of Linux and is freely available [1].