1997
DOI: 10.1109/78.575701
Discrete wavelet transform: data dependence analysis and synthesis of distributed memory and control array architectures

Cited by 40 publications (19 citation statements)
References 19 publications
“…Therefore, a number of studies relating to computational optimizations and parallel processing of the DWT have been carried out. Examples of such works include [5,13,10,8].…”
Section: Introduction
Confidence: 98%
“…The algorithms can be implemented in SIMD, MIMD and pipeline architectures on the configured system. Other works implementing wavelet transform and coding are restricted to expensive hardware [1], expensive and fast network architectures [2], or involve transputer systems that make the programming very complex [3]. Our system is more robust in terms of implementation and highly effective in the transform representation for distributed computing applications.…”
Section: Introduction
Confidence: 98%
“…Therefore, the implementation of the DWT by means of dedicated VLSI application-specific integrated circuits has recently captivated the attention of a number of researchers, and many DWT architectures have already been proposed [12]-[25]. Some of these devices have been targeted to have a low hardware complexity, but they require at least 2N clock cycles (ccs) to compute the DWT of a sequence having N samples (e.g., the devices proposed in [12]-[14], the architecture A2 in [15], etc.). Nevertheless, a large number of devices having a period of approximately N ccs have also been designed (e.g., the three architectures in [14] when they are provided with doubled hardware, the architecture A1 in [15], the architectures in [16]-[18], the parallel filter in [19], etc.).…”
Section: Introduction
Confidence: 99%
“…Some of these devices have been targeted to have a low hardware complexity, but they require at least 2N clock cycles (ccs) to compute the DWT of a sequence having N samples (e.g., the devices proposed in [12]-[14], the architecture A2 in [15], etc.). Nevertheless, a large number of devices having a period of approximately N ccs have also been designed (e.g., the three architectures in [14] when they are provided with doubled hardware, the architecture A1 in [15], the architectures in [16]-[18], the parallel filter in [19], etc.). Most of these architectures exploit the recursive pyramid algorithm (RPA) [26] or similar scheduling techniques in order both to reduce the memory requirement and to employ only one or two filter units, independently of the number of decomposition levels to be computed.…”
Section: Introduction
Confidence: 99%
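The pyramid structure the last statement refers to can be sketched in software: each decomposition level filters the previous level's approximation with the same lowpass/highpass pair and downsamples by 2, so the total work is roughly N + N/2 + N/4 + … ≈ 2N multiply-accumulates per filter, which is why a single reused filter unit needs on the order of 2N clock cycles. This is a minimal illustrative sketch, not any of the cited VLSI designs; the Haar filters and input data are assumptions for demonstration.

```python
import math

def dwt_level(x, h, g):
    """One analysis level: convolve with lowpass h and highpass g, downsample by 2.
    Periodic extension is used at the boundary."""
    n = len(x)
    approx, detail = [], []
    for k in range(0, n, 2):
        a = sum(h[i] * x[(k + i) % n] for i in range(len(h)))
        d = sum(g[i] * x[(k + i) % n] for i in range(len(g)))
        approx.append(a)
        detail.append(d)
    return approx, detail

def dwt(x, h, g, levels):
    """Pyramid: feed each approximation back through the same filter pair,
    the software analogue of reusing one hardware filter unit per level."""
    details = []
    approx = list(x)
    for _ in range(levels):
        approx, d = dwt_level(approx, h, g)
        details.append(d)
    return approx, details

# Haar analysis filters (orthonormal); chosen only to keep the example short.
s = 1 / math.sqrt(2)
h = [s, s]
g = [s, -s]

approx, details = dwt([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0], h, g, levels=2)
# approx ≈ [16.0, 12.0]; details[1] ≈ [-6.0, 2.0] (up to rounding)
```

The RPA-style architectures mentioned in the snippet interleave these levels in time on one or two physical filters instead of running them sequentially as this sketch does.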