2011
DOI: 10.1109/tsp.2011.2109953
Memory Efficient Modular VLSI Architecture for High-Throughput and Low-Latency Implementation of Multilevel Lifting 2-D DWT

Cited by 62 publications (92 citation statements)
References 34 publications

Citation statements (ordered by relevance):
“…Since no redundant computation is available in lifting 2-D DWT, there is no scope to reduce multiplier complexity without compromising the throughput rate. However, we observe that many multipliers of block-lifting 2-D DWT structures [12,13,15,17] share a common input operand. A group of multipliers with a common multiplying operand can select their partial product terms from a common set using a Booth encoding scheme.…”
Section: Introduction (mentioning)
confidence: 91%
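The shared-operand observation can be made concrete with a small software model. This is a hypothetical sketch, not the authors' hardware design: it assumes radix-4 Booth recoding and shows that once the partial-product set {0, ±x, ±2x} of the common operand x is built, every multiplier in the group only selects and shifts entries from that single set.

```python
def booth_radix4_digits(y, bits=16):
    """Recode multiplier y (two's complement, `bits` wide) into radix-4
    Booth digits, each in {-2, -1, 0, +1, +2}."""
    y &= (1 << bits) - 1
    padded = y << 1                      # implicit 0 below the LSB
    digits = []
    for i in range(0, bits, 2):
        b0 = (padded >> i) & 1           # y[i-1]
        b1 = (padded >> (i + 1)) & 1     # y[i]
        b2 = (padded >> (i + 2)) & 1     # y[i+1]
        digits.append(b0 + b1 - 2 * b2)
    return digits


def shared_operand_products(x, multipliers, bits=16):
    """Multiply the common operand x by several multipliers while drawing
    every partial product from one precomputed set {0, +-x, +-2x}."""
    pp_set = {0: 0, 1: x, 2: 2 * x, -1: -x, -2: -2 * x}   # formed once, shared
    results = []
    for y in multipliers:
        acc = 0
        for k, d in enumerate(booth_radix4_digits(y, bits)):
            acc += pp_set[d] << (2 * k)   # each multiplier only selects and shifts
        results.append(acc)
    return results


# e.g. shared_operand_products(7, [6, 3]) -> [42, 21]
```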
“…Unlike RPA-based designs, the folded design involves simple control circuitry and achieves 100 % hardware utilization efficiency (HUE). Keeping this in view, several folded architectures have been proposed for efficient implementation of lifting 2-D DWT [5][6][7][8][9][10][11][12]. Most of the designs differ in their number of arithmetic components, on-chip memory, cycle period and throughput rate.…”
Section: Two-Dimensional (2-D) Discrete Wavelet Transform (DWT) (mentioning)
confidence: 99%
“…[Figure excerpt from the citing paper: low-pass block with data samples a(9,8)–a(15,8), outputs h(4,4), h(4,5), h(4,7), h(4,8) and l(4,5)–l(4,8); original prose not recoverable.]…”
Section: Row-Processor, Column-Processor (mentioning)
confidence: 99%
“…The existing DWT architectures can be classified into two categories, namely convolution-based and lifting-based [5]. Compared with the convolution-based architecture, the lifting-based architecture has several advantages with respect to energy efficiency, such as lower computational complexity and memory-efficient in-place computation [4].…”
Section: A. Lifting Scheme (mentioning)
confidence: 99%
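As a concrete illustration of the memory-efficient in-place lifting computation mentioned above, the following is a minimal sketch of one 1-D decomposition level. It assumes the reversible 5/3 (LeGall) filter with mirrored border handling; the filter choice is for illustration only and is not tied to the cited architectures.

```python
def lifting_53_forward(x):
    """One level of the 1-D lifting DWT with the 5/3 (LeGall) filter.
    Minimal sketch: assumes an even-length input and mirrors samples at
    the borders."""
    s = x[0::2]                           # even samples -> approximation
    d = x[1::2]                           # odd samples  -> detail
    # Predict step: detail coefficients overwrite the odd samples.
    for i in range(len(d)):
        right = s[i + 1] if i + 1 < len(s) else s[i]
        d[i] -= (s[i] + right) // 2
    # Update step: approximation coefficients overwrite the even samples.
    for i in range(len(s)):
        left = d[i - 1] if i > 0 else d[0]
        cur = d[i] if i < len(d) else d[-1]
        s[i] += (left + cur + 2) // 4
    return s, d


# e.g. lifting_53_forward([1, 2, 3, 4, 5, 6]) -> ([1, 3, 5], [0, 0, 1])
```

Because each lifting step only rewrites existing sample positions, no separate coefficient buffer is needed, which is the in-place property the citing paper refers to.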
“…The line-based architectures [5]–[8] read the image in line-by-line order. A high-throughput line-based architecture for multi-level DWT is proposed in [5], with a transposition memory of length 2.5M and a temporal memory of length 3M for an image of size M × N.…”
Section: B. Related Work (mentioning)
confidence: 99%
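The line-by-line schedule can also be sketched in software. This is a hypothetical illustration, deliberately using Haar lifting rather than the filters of [5], only to show why a line-based schedule needs a few line buffers whose length is proportional to the image width rather than a full frame buffer.

```python
def line_based_dwt_2d(rows, width):
    """One decomposition level of a line-based 2-D DWT (Haar lifting).
    `rows` is an iterator yielding image rows of length `width`; the
    vertical transform keeps only a single buffered line.
    Assumes an even width and an even number of rows."""
    def haar_row(line):
        lo, hi = [], []
        for i in range(0, len(line), 2):
            d = line[i + 1] - line[i]     # horizontal predict (detail)
            a = line[i] + d // 2          # horizontal update (approximation)
            lo.append(a)
            hi.append(d)
        return lo + hi                    # [L..., H...]

    line_buffer = None                    # the only row-sized storage
    for r, row in enumerate(rows):
        t = haar_row(row)
        if r % 2 == 0:
            line_buffer = t               # even row: wait for its partner
        else:
            detail = [t[c] - line_buffer[c] for c in range(width)]            # vertical predict
            approx = [line_buffer[c] + detail[c] // 2 for c in range(width)]  # vertical update
            yield approx, detail          # one (LL|HL) row and one (LH|HH) row


# e.g. for a_row, d_row in line_based_dwt_2d(iter(image), width): consume subband rows
```

Here the only storage that grows with the image is the single `line_buffer` of length `width`; longer lifting filters such as 5/3 or 9/7 would need a few such buffers, which corresponds to the transposition and temporal memories of a few multiples of M quoted above.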