2021
DOI: 10.3390/electronics10050603
Area-Time Efficient Two-Dimensional Reconfigurable Integer DCT Architecture for HEVC

Abstract: In this paper, we present area-time efficient reconfigurable architectures for the implementation of the integer discrete cosine transform (DCT), which supports all the transform lengths to be used in High Efficiency Video Coding (HEVC). We propose three 1D reconfigurable architectures that can be configured for the computation of the DCT of any of the prescribed lengths such as 4, 8, 16, and 32. It is shown that matrix multiplication schemes involving fewer adders can be used to derive parallel architectures …

Cited by 3 publications (2 citation statements)
References 12 publications
“…Maher and Srikanthan demonstrated that parallel topologies for the 1D integer DCT of varying lengths can be derived from matrix multiplication schemes utilizing minimal adders. The suggested 2D DCT architecture makes use of a unique transposition buffer that, without altering the dimension of the transposition buffer, gives twice the throughput of existing solutions [25]. However, the N-point 1D-DCT architecture reuses N/2-point 1D-DCT blocks.…”
Section: Existing Models
confidence: 99%
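The N/2-point reuse noted in the statement above is the standard even-odd (partial-butterfly) decomposition of the HEVC core transform: symmetric input sums feed the N/2-point transform to produce the even-indexed outputs, while antisymmetric differences feed the odd-part coefficient matrix. A minimal Python sketch for the 8-point case (the coefficient tables match the HEVC core transform matrix; function names are illustrative, and scaling/rounding stages are omitted):

```python
T4 = [  # HEVC 4-point core transform matrix
    [64, 64, 64, 64],
    [83, 36, -36, -83],
    [64, -64, -64, 64],
    [36, -83, 83, -36],
]

ODD8 = [  # odd-row coefficients (first half) of the HEVC 8-point matrix
    [89, 75, 50, 18],
    [75, -18, -89, -50],
    [50, -89, 18, 75],
    [18, -50, 75, -89],
]

def dct4(x):
    """Direct 4-point integer transform (matrix-vector product)."""
    return [sum(row[i] * x[i] for i in range(4)) for row in T4]

def dct8(x):
    """8-point transform built from the 4-point block (even-odd decomposition)."""
    s = [x[i] + x[7 - i] for i in range(4)]  # symmetric sums -> even outputs
    d = [x[i] - x[7 - i] for i in range(4)]  # differences    -> odd outputs
    even = dct4(s)                            # reuse of the N/2-point block
    odd = [sum(row[i] * d[i] for i in range(4)) for row in ODD8]
    y = [0] * 8
    y[0::2] = even
    y[1::2] = odd
    return y
```

Because the even rows of the 8-point matrix are symmetric and equal to the 4-point matrix on their first half, while the odd rows are antisymmetric, `dct8` produces exactly the same result as multiplying by the full 8x8 matrix, with roughly half the multiplications; the same recursion extends to the 16- and 32-point cases.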
“…Several 8-point approximated DCTs are proposed in [16][17][18] with different techniques to derive efficient transforms with a lower number of required additions. Larger transforms such as the 16-point DCT can offer better performance, with more coding gain, compared with 4-point or 8-point transforms [19][20][21].…”
Section: Introduction
confidence: 99%