We propose a new scheme for designing a vector quantizer for image compression. First, a set of codevectors is generated using the self-organizing feature map (SOFM) algorithm. Then, the set of blocks associated with each codevector is modeled by a cubic surface for better perceptual fidelity of the reconstructed images. Mean-removed vectors from a set of training images are used to construct a generic codebook. Further, Huffman coding of the indices generated by the encoder and of the difference-coded mean values of the blocks is used to achieve a better compression ratio. We also propose two indices for quantitative assessment of the psychovisual quality (blocking effect) of the reconstructed image. Our experiments on several training and test images demonstrate that the proposed scheme can produce reconstructed images of good quality while achieving compression at low bit rates.

Index Terms-Cubic surface fitting, generic codebook, image compression, self-organizing feature map, vector quantization.
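The codebook-generation step described above can be illustrated with a minimal sketch: a 1-D self-organizing feature map is trained on mean-removed image blocks, and the encoder maps each block to the index of its nearest codevector. This is only an illustrative implementation of the SOFM stage under assumed parameters (codebook size, learning-rate and neighborhood schedules); the cubic-surface modeling and Huffman coding stages of the scheme are not reproduced here.

```python
import numpy as np

def train_sofm_codebook(blocks, n_codes=64, epochs=20, seed=0):
    """Train a 1-D self-organizing feature map on mean-removed image
    blocks and return the learned codevectors.

    blocks : (N, d) array of flattened, mean-removed image blocks.
    NOTE: schedules below are illustrative assumptions, not the
    parameters used in the paper.
    """
    rng = np.random.default_rng(seed)
    # Initialize codevectors from randomly chosen training blocks.
    codes = blocks[rng.choice(len(blocks), n_codes, replace=False)].astype(float)
    for epoch in range(epochs):
        # Shrink learning rate and neighborhood radius over time.
        lr = 0.5 * (1.0 - epoch / epochs)
        radius = max(1.0, n_codes / 4 * (1.0 - epoch / epochs))
        for x in blocks[rng.permutation(len(blocks))]:
            winner = np.argmin(np.linalg.norm(codes - x, axis=1))
            # Gaussian neighborhood function on the 1-D index lattice.
            dist = np.abs(np.arange(n_codes) - winner)
            h = np.exp(-(dist ** 2) / (2 * radius ** 2))
            codes += lr * h[:, None] * (x - codes)
    return codes

def encode(blocks, codes):
    """Return the index of the nearest codevector for each block."""
    d = np.linalg.norm(blocks[:, None, :] - codes[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```

In use, the block means would be subtracted before training and transmitted separately (difference-coded, per the abstract), with the emitted indices then entropy-coded.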
We propose a new comprehensive scheme for designing prototype-based classifiers. The scheme addresses all major issues (number of prototypes, generation of prototypes, and utilization of the prototypes) involved in the design of a prototype-based classifier. First we use Kohonen's self-organizing feature map (SOFM) algorithm to produce a minimum number (equal to the number of classes) of initial prototypes. Then we use a dynamic prototype generation and tuning algorithm (DYNAGEN) involving merging, splitting, deleting, and retraining of the prototypes to generate an adequate number of useful prototypes. These prototypes are used to design a "1 nearest multiple prototype (1-NMP)" classifier. Though the classifier performs quite well, it cannot reasonably deal with large variations in variance among the data from different classes. To overcome this deficiency we design a "1 most similar prototype (1-MSP)" classifier. We use the prototypes generated by the SOFM-based DYNAGEN algorithm and associate with each of them a zone of influence. A Euclidean-norm-induced similarity measure is used for this. The prototypes and their zones of influence are fine-tuned by minimizing an error function. Both classifiers are trained and tested using several data sets, and a consistent improvement in performance of the latter over the former has been observed. We also compared our classifiers with some benchmark results available in the literature.
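The difference between the two classifiers above can be sketched in a few lines. In the 1-NMP classifier a sample takes the label of its nearest prototype; in the 1-MSP classifier each prototype additionally carries a zone of influence, so a farther prototype with a wider zone can win. This is a minimal sketch assuming prototypes, labels, and zone widths have already been produced (e.g. by the SOFM-based DYNAGEN algorithm, which is not reproduced here); the exponential similarity used below is one plausible Euclidean-norm-induced choice, not necessarily the paper's exact form.

```python
import numpy as np

class NMP1:
    """1 nearest multiple prototype (1-NMP): each class may own several
    labeled prototypes; a sample takes the label of its nearest
    prototype under the Euclidean norm."""

    def __init__(self, prototypes, labels):
        self.prototypes = np.asarray(prototypes, dtype=float)
        self.labels = np.asarray(labels)

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        d = np.linalg.norm(X[:, None, :] - self.prototypes[None, :, :], axis=2)
        return self.labels[np.argmin(d, axis=1)]

class MSP1(NMP1):
    """1 most similar prototype (1-MSP): each prototype also carries a
    zone of influence sigma; similarity is exp(-||x - v||^2 / sigma^2),
    an assumed form, so classes with very different spreads can be
    compared fairly."""

    def __init__(self, prototypes, labels, sigmas):
        super().__init__(prototypes, labels)
        self.sigmas = np.asarray(sigmas, dtype=float)

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        d2 = ((X[:, None, :] - self.prototypes[None, :, :]) ** 2).sum(axis=2)
        sim = np.exp(-d2 / self.sigmas[None, :] ** 2)
        return self.labels[np.argmax(sim, axis=1)]
```

A point slightly nearer a tight-cluster prototype but well inside a broad cluster's zone of influence is classified differently by the two rules, which is exactly the variance-mismatch problem the 1-MSP classifier is designed to handle.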
Land cover classification using multispectral satellite images is a very challenging task with numerous practical applications. We propose a multi-stage classifier that involves fuzzy rule extraction from the training data and then generation of a possibilistic label vector for each pixel using the fuzzy rule base. To exploit the spatial correlation of land cover types we propose four different information aggregation methods, which use the possibilistic class label of a pixel and those of its eight spatial neighbors to make the final classification decision. Three of the aggregation methods use the Dempster-Shafer theory of evidence, while the remaining one is modeled after the fuzzy k-NN rule. The proposed methods are tested with two benchmark seven-channel satellite images and the results are found to be quite satisfactory. They are also compared with a Markov random field (MRF) model-based contextual classification method and are found to perform consistently better.
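The neighborhood aggregation idea above can be sketched as follows, in the spirit of the fuzzy k-NN-style variant: each pixel's possibilistic label vector is blended with the mean label vector of its eight spatial neighbors before taking the final hard decision. This is an assumed, simplified formulation; the paper's three Dempster-Shafer combination methods, and the exact weighting of the fuzzy k-NN-style one, are not reproduced here, and the `weight` parameter is hypothetical.

```python
import numpy as np

def aggregate_neighborhood(possibilistic, weight=0.5):
    """Combine each pixel's possibilistic label vector with those of
    its eight spatial neighbors: the final class score is a weighted
    sum of the pixel's own label vector and the mean label vector of
    its 3x3 neighborhood (edge pixels use replicated borders).

    possibilistic : (H, W, C) array of per-pixel class possibilities.
    Returns an (H, W) array of hard class decisions.
    """
    H, W, C = possibilistic.shape
    padded = np.pad(possibilistic, ((1, 1), (1, 1), (0, 0)), mode="edge")
    neigh_sum = np.zeros_like(possibilistic)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # skip the center pixel itself
            neigh_sum += padded[1 + dy:1 + dy + H, 1 + dx:1 + dx + W, :]
    combined = weight * possibilistic + (1 - weight) * neigh_sum / 8.0
    return np.argmax(combined, axis=2)
```

The effect is that an isolated pixel whose own evidence weakly favors one land cover type can be overruled when all eight neighbors strongly agree on another, exploiting the spatial correlation of land cover types.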
While performing knowledge-intensive tasks of a professional nature, knowledge workers need to access and process large volumes of information. Beyond sheer quantity, they also require that the information received be of high quality in terms of authenticity and detail. This, in turn, requires that the delivered information include argumentative support, exhibiting the reasoning process behind its development, and provenance, indicating its lineage. In conventional document-centric practices for information management, such details are difficult to capture, represent and archive, and retrieve and deliver. To achieve such capability we need to rethink some core issues of information management from this requirements perspective. In this paper we develop a framework for the comprehensive representation of information in an archive, capturing informational content along with its context. We call it the "Information Warehouse (IW)" framework of information archival. The IW is a significant yet technologically realizable conceptual advance that can efficiently support some interesting classes of applications of great use to knowledge workers.