Abstract-Voice communications such as telephony are delay sensitive. Existing voice-over-IP (VoIP) applications transmit voice data in packets of very small size to minimize packetization delay, causing very inefficient use of network bandwidth. This paper proposes a multiplexing scheme for improving the bandwidth efficiency of existing VoIP applications. By installing a multiplexer in an H.323 proxy, voice packets from multiple sources are combined into one IP packet for transmission. A demultiplexer at the receiver-end proxy restores the original voice packets before delivering them to the end-user applications. Results show that the multiplexing scheme can increase bandwidth efficiency by as much as 300%. The multiplexing scheme is fully compatible with existing H.323-compliant VoIP applications and can be readily deployed.
Index Terms-Internet telephony, multiplexing, voice over IP.
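As a rough illustration of the idea described in the abstract, the following Python sketch packs several small voice frames into a single IP payload at one proxy and unpacks them at the other. The 4-byte per-frame mini-header (source id plus length) and the 20-byte voice payloads are assumptions made for illustration, not the packet format specified in the paper.

```python
# Hypothetical sketch of proxy-side voice-packet multiplexing. The mini-header
# layout below is an illustrative assumption, not the paper's format.
import struct

PER_PACKET_OVERHEAD = 20 + 8 + 12   # IP + UDP + RTP headers, in bytes

def multiplex(frames):
    """Pack several (source_id, voice_payload) frames into one IP payload."""
    out = bytearray()
    for source_id, payload in frames:
        out += struct.pack("!HH", source_id, len(payload)) + payload
    return bytes(out)

def demultiplex(blob):
    """Recover the original (source_id, payload) frames at the receiving proxy."""
    frames, offset = [], 0
    while offset < len(blob):
        source_id, length = struct.unpack_from("!HH", blob, offset)
        offset += 4
        frames.append((source_id, blob[offset:offset + length]))
        offset += length
    return frames

# Rough comparison for ten sources each sending a 20-byte voice payload:
frames = [(i, b"\x00" * 20) for i in range(10)]
unmuxed_bytes = len(frames) * (PER_PACKET_OVERHEAD + 20)    # ten separate packets
muxed_bytes = PER_PACKET_OVERHEAD + len(multiplex(frames))  # one combined packet
assert demultiplex(multiplex(frames)) == frames
print(f"{unmuxed_bytes} bytes unmultiplexed vs {muxed_bytes} bytes multiplexed")
```

Because each small voice payload no longer carries its own IP/UDP/RTP headers, the per-packet overhead is amortized across all multiplexed sources, which is the source of the bandwidth-efficiency gain the abstract reports.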
In conventional video-on-demand (VoD) systems, compressed digital video streams are stored in a video server for delivery to receiver stations over a communication network. This article introduces a framework for the design of parallel video server architectures and addresses three central architectural issues: video distribution architectures, server striping policies, and video delivery protocols. Among the many different types of media available for retrieval, retrieving full-motion, high-quality video in real time, i.e., video-on-demand (VoD), poses the greatest challenges. Digital video not only requires significantly more storage space and transmission bandwidth than traditional data services, but it must also be delivered in time for continuous playback. Many studies conducted in the last decade have addressed these issues. One common architecture shared by most VoD systems is a single-server model. The video servers can range from a standard PC for small-scale systems to massively parallel supercomputers with thousands of processors for large-scale systems. However, this single-server approach has its limitations.
Scalability
The first of these limitations is capacity. When demand exceeds the server's capacity, one may need to replicate data to a new server. This doubles the system's storage requirements. To reduce the storage overhead of replication and to balance the load among replicated servers, a number of studies have proposed replication algorithms based on video popularity as well as server heterogeneity, e.g., differing storage and bandwidth capacities [1], [2]. A second approach partitions the video titles into disjoint subsets and stores each subset on a separate video server. Although this approach incurs no extra storage, it suffers from another problem: load balancing. Studies have shown that video retrieval is highly skewed in many applications because some videos are far more popular than others [3]. Furthermore, the skewness
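The load-balancing problem with partitioned storage can be illustrated with a small sketch. A Zipf-like popularity distribution is assumed here purely for illustration (a common modeling assumption, not a result from the article above): the server that happens to hold the most popular titles ends up carrying most of the load.

```python
# Illustrative sketch: disjoint partitioning of titles across servers under a
# Zipf-like popularity distribution (assumed parameters, not the article's data).

def zipf_popularity(num_titles: int, skew: float = 1.0):
    """Normalized access probability of each title, most popular first."""
    weights = [1.0 / (rank ** skew) for rank in range(1, num_titles + 1)]
    total = sum(weights)
    return [w / total for w in weights]

NUM_SERVERS = 4
NUM_TITLES = 100
popularity = zipf_popularity(NUM_TITLES)

# Partition titles into disjoint subsets: titles 0..24 on server 0, and so on.
load = [0.0] * NUM_SERVERS
for title, p in enumerate(popularity):
    load[title * NUM_SERVERS // NUM_TITLES] += p

print([round(l, 2) for l in load])   # roughly [0.74, 0.13, 0.08, 0.06]
```

With the most popular quarter of the titles on one server, that server attracts roughly three quarters of all requests, which is why partitioning alone does not balance load.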
Abstract-In conventional video-on-demand systems, video data are stored in a video server for delivery to multiple receivers over a communications network. The video server's hardware limits the maximum storage capacity as well as the maximum number of video sessions that can be delivered concurrently. Clearly, these limits will eventually be exceeded by the growing need for better video quality and larger user populations. This paper studies a parallel video server architecture that exploits server parallelism to achieve incremental scalability. First, unlike data partitioning and replication, the architecture employs data striping at the server level to achieve fine-grain load balancing across multiple servers. Second, a client-pull service model is employed to eliminate the need for interserver synchronization. Third, an admission-scheduling algorithm is proposed to further control the instantaneous load at each server so that linear scalability can be achieved. This paper analyzes the performance of the architecture by deriving bounds for server service delay, client buffer requirement, prefetch delay, and scheduling delay. These performance metrics and design tradeoffs are further evaluated using numerical examples. Our results show that the proposed parallel video server architecture can be linearly scaled up to serve more concurrent users simply by adding more servers and redistributing the video data among the servers.
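A minimal sketch of the striping and client-pull ideas follows. The round-robin placement and the request loop are illustrative assumptions; the paper's admission-scheduling algorithm and delay bounds are not reproduced here.

```python
# Minimal sketch of server-level striping with a client-pull retrieval pattern.
# Placement policy and block indexing are assumptions made for illustration.

NUM_SERVERS = 5

def server_for_block(block_index: int) -> int:
    """Round-robin striping: consecutive stripe units go to consecutive servers,
    so every video title spreads its load evenly regardless of popularity."""
    return block_index % NUM_SERVERS

def client_pull_schedule(num_blocks: int):
    """In a client-pull model the client requests each stripe unit from the
    server that stores it, at its own playback rate, so the servers never need
    to synchronize with one another."""
    return [(block, server_for_block(block)) for block in range(num_blocks)]

# The first ten requests of a playback session:
print(client_pull_schedule(10))
# -> [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 0), (6, 1), ...]
```

Because every title is spread over all servers at stripe-unit granularity, adding a server and redistributing the stripe units increases capacity without creating popularity hot spots, which is the fine-grain load balancing the abstract refers to.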
Abstract-Current video-on-demand (VoD) systems can be classified into two categories: 1) true-VoD (TVoD) and 2) near-VoD (NVoD). TVoD systems allocate a dedicated channel for every user to achieve short response times so that the user can select what video to play, when to play it, and perform interactive VCR-like controls at will. By contrast, NVoD systems transmit videos repeatedly over multiple broadcast or multicast channels to enable multiple users to share a single video channel so that system cost can be substantially reduced. The tradeoffs are limited video selections, fixed playback schedule, and limited or no interactive control. TVoD systems can be considered as one extreme where service quality is maximized, while NVoD systems can be considered as the other extreme where system cost is minimized. This paper proposes a novel architecture called Unified VoD (UVoD) that can be configured to achieve cost-performance tradeoff anywhere between the two extremes (i.e., TVoD and NVoD). Assuming that a video client can concurrently receive two video channels and has local buffers for caching a portion of the video data, the proposed UVoD architecture can achieve significant performance gains (e.g., 400% more capacity for a 500-channel system) over TVoD under the same latency constraint. This paper presents the UVoD architecture, establishes a performance model, and analyzes UVoD's performance via numerical and simulation results.
Index Terms-Near-video-on-demand (NVoD), performance analysis, true-video-on-demand (TVoD), unified architecture, UVoD, video-on-demand (VoD).
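The back-of-the-envelope sketch below shows one plausible reading of the channel-sharing mechanism: a client receives a unicast catch-up stream only until the next staggered multicast transmission point, while caching the multicast channel in its local buffer, so the unicast channel is held for only a fraction of the video length. The parameters and the mechanism as coded are assumptions for illustration, not the paper's model.

```python
# Back-of-the-envelope sketch of UVoD-style channel sharing under assumed
# parameters; the sharing mechanism coded here is an interpretation, not a
# reproduction of the paper's performance model.

VIDEO_LENGTH_MIN = 120      # video length (assumed)
MULTICAST_CHANNELS = 25     # channels repeatedly multicasting the video (assumed)

# Multicast transmissions are staggered, so consecutive starts are this far apart:
offset_period = VIDEO_LENGTH_MIN / MULTICAST_CHANNELS   # 4.8 minutes

# TVoD holds a dedicated channel for the whole video; in the scheme sketched
# here the unicast channel is released as soon as the client can switch over to
# its cached multicast data (on average half an offset period after arrival).
tvod_holding_time = VIDEO_LENGTH_MIN
shared_holding_time = offset_period / 2

print(f"channel holding time: {tvod_holding_time} min (TVoD) "
      f"vs {shared_holding_time:.1f} min (shared scheme)")
# The paper's full analysis, which also accounts for the latency constraint and
# the split between multicast and unicast channels, is what yields the reported
# gains (e.g., 400% more capacity for a 500-channel system).
```

The shorter a unicast channel is held, the more users the same channel pool can admit, which is the intuition behind configuring UVoD anywhere between the TVoD and NVoD extremes.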