Vector quantization is a popular data compression technique due to its theoretical advantage over scalar quantization, which enables the exploitation of dependencies between neighboring samples. However, the complexity of the encoding process imposes certain limitations on the size of the codebook and/or the dimensions of the processed blocks. In this paper, we show that this complexity can be conveniently distributed as subcodebooks over general-purpose MIMD parallel processors, providing almost linearly scalable throughput and flexible configurability. A particular advantage of this approach is that it makes feasible the use of higher-dimensional image blocks and/or larger codebooks, leading to improved coding performance with no penalty in execution speed compared with the original sequential implementation.
I. INTRODUCTION

Image compression is an essential part of any imaging system. It aims to reduce the large amount of data required for the storage and/or transmission of digital images and video. This is an important issue in applications such as high-definition TV, videoconferencing and video telephony [1], [2].

Vector quantization (VQ) has been extensively investigated for audio, speech, image and video coding applications [3], [4], [5]. The main advantage of VQ-based compression algorithms is the simplicity of the decoding process: it typically involves a simple look-up table operation, which makes VQ particularly useful for single-encoder, multi-decoder systems. On the other hand, the major limitation of VQ is the high computational complexity required to select the best-matched codevector from the VQ codebook.*

Although the encoding task in VQ is computationally expensive, it is well suited to parallel processing because of the repetitive nature of the task and the regularity of the image data. Given the large image data volume and the computational complexity of the encoding task in VQ, the processing speed needed to meet this computational demand is very high. Parallel processing has been a natural framework for fast image processing applications [6], [7]. In this paper, we present the application of a scalable parallel approach to the implementation of the VQ encoding process. This approach is based upon the pipeline processor farm (PPF) design methodology, which has been developed for embedded image processing and computer vision applications [8], [9], [10].

*e-mail: cuhaa@gubim.bim.gantep.edu.tr, Fax: 90-342-3601100
II. VECTOR QUANTIZATION

Vector quantization is a generalization of scalar quantization in which a group of samples is quantized jointly instead of each sample being quantized individually [5]. This offers the advantage that dependencies between neighboring data can be directly exploited. In general, a vector quantizer Q of dimension k and size N can be defined as a mapping of the vectors (points) of the k-dimensional Euclidean space R^k into a finite subset Y of N output points. That is,

Q: R^k -> Y,

where Y = {y_i in R^k; i = 1, 2, ..., N} is the set of output points (codevectors).
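The nearest-neighbor encoding rule implied by this mapping, Q(x) = y_i minimizing the distortion between x and y_i, can be sketched as follows (a minimal illustration assuming squared Euclidean distance as the distortion measure; the function name `vq_encode` is hypothetical):

```python
def vq_encode(x, codebook):
    # Q(x): map the k-dimensional input vector x to the index i of the
    # nearest codevector y_i in the codebook Y = {y_1, ..., y_N},
    # i.e. argmin_i ||x - y_i||^2 (full search over all N codevectors).
    return min(
        range(len(codebook)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(x, codebook[i])),
    )
```

This full search costs O(Nk) operations per input block, which is exactly the encoding complexity that motivates the parallel implementation discussed in this paper.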