1992
DOI: 10.1109/72.129423

A VLSI neural processor for image data compression using self-organization networks

Abstract: An adaptive electronic neural network processor has been developed for high-speed image compression based on a frequency-sensitive self-organization algorithm. The performance of this self-organization network and that of a conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results. The neural network processor includes a pipelined codebook generator and a paralleled vector quantizer, which obtains a time complexity O(1) for each qua…
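The "frequency-sensitive self-organization" the abstract refers to is a competitive-learning scheme for codebook design in which each codeword's distance to the input is scaled by how often that codeword has already won, so under-used codewords stay competitive. Below is a rough software sketch of that general technique, not the paper's exact algorithm; the function name, parameters, and the Euclidean metric are illustrative assumptions:

```python
import numpy as np

def fscl_train(samples, codebook_size=16, epochs=10, lr=0.05, seed=0):
    """Frequency-sensitive competitive learning for VQ codebook design.

    A sketch of the general technique only (illustrative, not the
    paper's formulation): each codeword's distance to the input is
    scaled by its win count, keeping rarely used codewords competitive.
    """
    rng = np.random.default_rng(seed)
    # Initialize the codebook from randomly chosen training vectors.
    idx = rng.choice(len(samples), codebook_size, replace=False)
    codebook = samples[idx].astype(float)
    wins = np.ones(codebook_size)  # win counts (frequency sensitivity)

    for _ in range(epochs):
        for x in samples:
            # Squared Euclidean distances, scaled by each codeword's win count.
            d = wins * np.sum((codebook - x) ** 2, axis=1)
            k = int(np.argmin(d))                   # frequency-sensitive winner
            codebook[k] += lr * (x - codebook[k])   # move winner toward input
            wins[k] += 1
    return codebook

# Usage: build a 16-entry codebook for 4x4 image blocks (16-dim vectors).
blocks = np.random.default_rng(1).random((1000, 16))
cb = fscl_train(blocks)
```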

Cited by 110 publications (19 citation statements)
References 25 publications
Citing publications span 1995–2009.

Citation statements
“…The core of the VQ consists of a 16 × 16 2-D array of distance estimation cells, configured to interconnect columns and rows according to the vector input components and template outputs. Each cell computes in parallel the absolute-difference distance between one component of the input vector and the corresponding component of one of the template vectors (1). The MAD distance between input and template vectors is accumulated along rows (2) and presented to the WTA, which selects the single winner (3). All computations in the VQ processor are performed in parallel, including the distance estimations and the winner-take-all search. It is by now well known that parallel architectures allow energetically more efficient implementation in CMOS for a given computational bandwidth requirement.…”
Section: Architecture (mentioning)
confidence: 99%
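The quoted passage walks through the VQ datapath: per-component absolute-difference cells (1), row-wise accumulation into a mean-absolute-difference (MAD) distance (2), and winner-take-all selection (3). A minimal software analogue, sequential where the chip is fully parallel, with illustrative names (`vq_encode_mad` is not from the paper):

```python
import numpy as np

def vq_encode_mad(x, templates):
    """Encode input vector x against a template (codebook) matrix.

    Mirrors the quoted datapath in software: per-component absolute
    differences (the cell array), row-wise accumulation into MAD
    distances, and an argmin standing in for the WTA circuit. The
    hardware performs all three stages in parallel; this NumPy
    version is only a functional sketch.
    """
    diffs = np.abs(templates - x)   # (1) per-cell |x_j - c_kj|
    mad = diffs.sum(axis=1)         # (2) accumulate along each row
    return int(np.argmin(mad))      # (3) winner-take-all selection

# Usage: 16 templates of dimension 16, matching the 16 x 16 cell array.
rng = np.random.default_rng(0)
templates = rng.random((16, 16))
x = rng.random(16)
winner = vq_encode_mad(x, templates)
```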
“…The circuit implementation of the WTA function combines the compact sizing and modularity of a linear architecture, as in [3], [11], and [12], with positive feedback for fast and decisive output settling independent of signal levels, as in [4] and [5]. Typical positive-feedback structures for WTA operation use a logarithmic tree [5] or a fully interconnected network [4], with implementation complexities of order N log N and N^2, respectively, N being the number of WTA inputs.…”
Section: B. Winner-Take-All Circuitry (mentioning)
confidence: 99%
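To make the complexity contrast concrete: a tree reduction resolves the winner level by level, while a fully interconnected scheme effectively compares every pair of inputs. The sketch below contrasts the two in software; it is a digital analogy only (the cited circuits settle by analog positive feedback rather than explicit comparisons), and both helper names are illustrative:

```python
def wta_tree(values):
    """Tournament-tree winner-take-all: N-1 comparisons over log2(N) levels."""
    idx = list(range(len(values)))
    while len(idx) > 1:
        nxt = []
        for i in range(0, len(idx) - 1, 2):
            a, b = idx[i], idx[i + 1]
            nxt.append(a if values[a] >= values[b] else b)  # local 2-input WTA
        if len(idx) % 2:          # odd leftover advances unchallenged
            nxt.append(idx[-1])
        idx = nxt
    return idx[0]

def wta_all_pairs(values):
    """Fully interconnected analogue: every input compared with every other."""
    n = len(values)
    wins = [0] * n
    for i in range(n):
        for j in range(i + 1, n):  # N(N-1)/2 pairwise comparisons
            if values[i] >= values[j]:
                wins[i] += 1
            else:
                wins[j] += 1
    return wins.index(n - 1)       # winner beats all N-1 rivals

v = [0.2, 0.9, 0.4, 0.7]
assert wta_tree(v) == wta_all_pairs(v) == 1
```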
“…On the other hand, the human visual system is highly effective at image analysis, extracting the relevant information. In the last few years, technical progress in signal acquisition [2,3] and processing has allowed the development of new artificial vision systems that match or surpass human capabilities.…”
Section: Introduction (mentioning)
confidence: 99%
“…Melton et al. [21] noted that, in a large analog network, the large number of analog signals that must pass between chips will exceed the available input-output (I/O) resources, while the noise and the parasitic capacitances on extended I/O lines will distort the operation of the network and possibly lead to erroneous results. The use of ancillary chips such as D/A converters and external weight storage [31] also adds considerable overhead to a system-level implementation. This is a serious limitation of large analog computing networks.…”
Section: Introduction (mentioning)
confidence: 99%