2007 International Conference on Multimedia and Ubiquitous Engineering (MUE'07)
DOI: 10.1109/mue.2007.21
A Memory and Performance Optimized Architecture of Deblocking Filter in H.264/AVC

Cited by 17 publications (10 citation statements)
References 3 publications
“…A major effort has been spent on individual blocks of the H.264 codec, e.g., DCT [33]-[35], ME [36]-[39], and De-blocking Filter [40]-[44]. Instead of targeting one specific component, we have implemented 12 hardware accelerators (see Section 3) for the major computational-intensive parts of the H.264 encoder and used them for evaluating our proposed application structure in the result section.…”
Section: Related Work (mentioning)
confidence: 99%
“…[40] uses a 2×4×4 internal buffer and a 32×16 internal SRAM for the buffering operation of the deblocking filter, with an I/O bandwidth of 32 bits. All filtering options are calculated in parallel while the condition computation is done in a control unit.…”
Section: Related Work (mentioning)
confidence: 99%
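The parallel-options-plus-control organization quoted above can be pictured in software as follows. This is a minimal C sketch, not code from the cited design: both candidate outputs for one luma edge sample are computed unconditionally (the "datapath"), and a separate boundary-strength and alpha/beta check (the "control unit") decides which result, if any, is written back. The filter equations are simplified luma forms, and `tc0`, `alpha`, and `beta` are assumed to be derived elsewhere per the standard.

```c
#include <stdint.h>
#include <stdlib.h>

/* Minimal sketch (not the cited design): both filter options for one luma edge
 * sample are computed unconditionally, mirroring a datapath that evaluates all
 * filtering options in parallel, while a separate "control" decision
 * (boundary strength plus alpha/beta thresholds) selects which result to keep. */

static int clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }

/* p1, p0 | q0, q1 are samples across a block edge; bs is the boundary strength.
 * alpha, beta and tc0 are assumed to come from the QP-based tables.            */
void filter_edge_sample(uint8_t *p1, uint8_t *p0, uint8_t *q0, uint8_t *q1,
                        int bs, int alpha, int beta, int tc0)
{
    /* "Datapath": compute every candidate output regardless of the condition. */
    int delta     = clip3(-tc0, tc0, (((*q0 - *p0) << 2) + (*p1 - *q1) + 4) >> 3);
    int p0_normal = clip3(0, 255, *p0 + delta);        /* normal filter, bs < 4    */
    int q0_normal = clip3(0, 255, *q0 - delta);
    int p0_strong = (2 * *p1 + *p0 + *q1 + 2) >> 2;    /* simplified strong filter */
    int q0_strong = (2 * *q1 + *q0 + *p1 + 2) >> 2;

    /* "Control unit": the edge-activity test decides which option is committed. */
    int filter_on = bs > 0 && abs(*p0 - *q0) < alpha &&
                    abs(*p1 - *p0) < beta && abs(*q1 - *q0) < beta;
    if (!filter_on)
        return;                                        /* keep unfiltered samples  */
    if (bs == 4) { *p0 = (uint8_t)p0_strong; *q0 = (uint8_t)q0_strong; }
    else         { *p0 = (uint8_t)p0_normal; *q0 = (uint8_t)q0_normal; }
}
```

In a hardware realization the candidate results correspond to parallel arithmetic paths, and the final selection is the multiplexing performed by the control unit mentioned in the quote.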
“…This approach provides better data reuse than one-dimensional filtering, and pixels are written back to main memory earlier. In [8]-[10], a two-dimensional filtering order is used, alternating horizontal and vertical filtering on the block, which reduces the local buffer size. In [7], [11]-[13], pipelined filter designs offer the advantages of a shorter critical path, a higher clock speed, and lower latency.…”
Section: Introduction (mentioning)
confidence: 99%
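The two-dimensional filtering order referred to above can be outlined as follows. This is an illustrative C sketch under assumed names: `filter_vertical_edge` and `filter_horizontal_edge` are hypothetical per-edge filters, not code from the cited papers. Instead of filtering all vertical edges of a macroblock and then all horizontal edges, the left and top edges of each 4x4 block are filtered back to back, so the block's samples are reused while still held in the local buffer. A real design must also preserve the standard's edge-ordering dependencies, which the cited schemes handle explicitly.

```c
#include <stdint.h>

/* Hypothetical per-edge filters: process the vertical (left) or horizontal (top)
 * edge of the 4x4 block whose top-left luma sample is (x, y) within the MB.    */
void filter_vertical_edge(uint8_t mb[16][16], int x, int y);
void filter_horizontal_edge(uint8_t mb[16][16], int x, int y);

/* Interleaved (2-D) processing order over the sixteen 4x4 blocks of one 16x16
 * luma macroblock: each block's two edges are filtered consecutively, improving
 * data reuse and shrinking the local buffer compared with filtering all
 * vertical edges first and all horizontal edges afterwards.                    */
void deblock_macroblock_2d_order(uint8_t mb[16][16])
{
    for (int by = 0; by < 4; by++) {            /* 4x4-block row within the MB    */
        for (int bx = 0; bx < 4; bx++) {        /* 4x4-block column within the MB */
            filter_vertical_edge(mb, 4 * bx, 4 * by);    /* left edge of the block */
            filter_horizontal_edge(mb, 4 * bx, 4 * by);  /* top edge of the block  */
        }
    }
}
```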
“…Previous research on fast deblocking filters focuses on two fields: pure software implementations [1][2] and hardware implementations [3]-[11]. The complexity of the filter stems mainly from its high adaptivity, which leads to many conditional computations executed in the inner loop of the algorithm; the filter occupies about 30% to 90% of the overall computation time [3].…”
Section: Introduction (mentioning)
confidence: 99%
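To make the quoted complexity argument concrete, the sketch below shows the chain of data-dependent conditions evaluated for every line of samples on every 4x4 edge. It is a plain-C illustration under stated assumptions: `filter_strong` and `filter_normal` are hypothetical stand-ins for the per-line sample updates, and `bs`, `alpha`, `beta`, `tc0` are assumed to be precomputed from the quantization parameters as in the standard.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the actual per-line sample updates. */
static void filter_strong(uint8_t *p, uint8_t *q)          { (void)p; (void)q; }
static void filter_normal(uint8_t *p, uint8_t *q, int tc0) { (void)p; (void)q; (void)tc0; }

/* One edge between two 4x4 blocks: s[i][0..3] = p3..p0, s[i][4..7] = q0..q3.
 * bs is the boundary strength; alpha, beta, tc0 come from the QP-based tables. */
void filter_4x4_edge(uint8_t s[4][8], int bs, int alpha, int beta, int tc0)
{
    if (bs == 0)
        return;                                  /* whole edge skipped              */
    for (int i = 0; i < 4; i++) {                /* inner loop: one line of samples  */
        uint8_t *p = &s[i][3], *q = &s[i][4];    /* p0 and q0 for this line          */
        /* Data-dependent activity test: real image edges must not be smoothed.    */
        if (abs(p[0] - q[0]) >= alpha ||
            abs(p[-1] - p[0]) >= beta || abs(q[1] - q[0]) >= beta)
            continue;                            /* this line left unfiltered        */
        if (bs == 4)
            filter_strong(p, q);                 /* strongest mode (intra MB edges)  */
        else
            filter_normal(p, q, tc0);            /* clipped normal mode, bs 1..3     */
    }
}
```

Because every branch depends on decoded sample values, the outcome cannot be resolved ahead of time; this per-line adaptivity is what hardware designs move into a dedicated control unit.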
“…In Refs. [4][5][7][8], advanced 2D processing orders are proposed, which reduce operation cycles and memory accesses. In Refs.…”
Section: Introduction (mentioning)
confidence: 99%