2003
DOI: 10.1109/tip.2003.819433

Lifting-based invertible motion adaptive transform (LIMAT) framework for highly scalable video compression

Abstract: We propose a new framework for highly scalable video compression, using a lifting-based invertible motion adaptive transform (LIMAT). We use motion-compensated lifting steps to implement the temporal wavelet transform, which preserves invertibility, regardless of the motion model. By contrast, the invertibility requirement has restricted previous approaches to either block-based or global motion compensation. We show that the proposed framework effectively applies the temporal wavelet transform along a set of …
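As a reading aid (not from the paper itself), the lifting structure the abstract refers to can be sketched in a few lines of Python. This is a minimal, hypothetical sketch of one temporal level of motion-compensated Haar lifting: the warp() motion operator, the integer motion vector, and all function names are illustrative assumptions, not the authors' code, but the predict/update steps show why the transform inverts exactly no matter what motion model is plugged in.

# Minimal sketch (assumed, not the authors' implementation) of one level of
# motion-compensated Haar lifting. warp() is a hypothetical motion operator,
# here a toy whole-pixel translation standing in for richer motion models.
import numpy as np

def warp(frame, mv):
    # Hypothetical motion operator: whole-pixel translation. Invertibility
    # of the transform below never depends on warp itself being invertible.
    return np.roll(frame, mv, axis=(0, 1))

def mc_haar_forward(even, odd, mv):
    # Predict step: high-pass = odd frame minus motion-compensated even frame.
    h = odd - warp(even, mv)
    # Update step: low-pass = even frame plus half the reverse-warped high-pass.
    l = even + 0.5 * warp(h, (-mv[0], -mv[1]))
    return l, h

def mc_haar_inverse(l, h, mv):
    # Lifting steps undone in reverse order with flipped signs: always exact.
    even = l - 0.5 * warp(h, (-mv[0], -mv[1]))
    odd = h + warp(even, mv)
    return even, odd

rng = np.random.default_rng(0)
f0, f1 = rng.standard_normal((2, 16, 16))
l, h = mc_haar_forward(f0, f1, mv=(1, -2))
r0, r1 = mc_haar_inverse(l, h, mv=(1, -2))
assert np.allclose(f0, r0) and np.allclose(f1, r1)  # perfect reconstruction

Because the inverse subtracts each lifting step back out exactly, perfect reconstruction holds for any choice of warp(), which is the property that frees the framework from block-based or global motion compensation.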

Cited by 213 publications (146 citation statements)
References 19 publications
“…To evaluate the coding performance of the proposed encoder, we compare it with a MCTF approach [25] and with two different configurations of the H.264/AVC reference software JM15.1 [46]. In the first configuration of JM15.1 (H.264_simp), the test conditions are set so that only tools similar to the ones implemented in our encoder are enabled.…”
Section: A. Video Coding Results (mentioning)
confidence: 99%
“…Once the transform is defined, we propose a coefficient reordering approach and an entropy coder, leading to a complete video encoder. On average, our proposed system achieves improvements of 1.24 dB with respect to a MCTF encoder [25] and 0.34 dB with respect to a simplified encoder derived from H.264/AVC (reference software JM15.1 configured to use tools similar to those in the proposed encoder, i.e., 1 reference frame, no subpixel motion estimation, 16 × 16 inter and 4 × 4 intra modes), for a variety of standard QCIF and CIF video sequences. These improvements are more significant at high qualities, where they are in the range of 1 to 3 dB with respect to the simplified H.264/AVC video encoder, obtaining similar coding results in six out of twelve test sequences when comparing to JM15.1 configured to allow 5 reference frames, all the inter and intra modes available, and motion estimation similar to the proposed encoder (subpixel motion estimation disabled).…”
Section: Contributions (mentioning)
confidence: 94%
“…Well-known examples include bandelets [78], edge-adapted multiscale transform [17], wedgelets [32,101], wavelet footprints [35], best tree-based representations [43,85], directionlets [96], motion-adaptive transform for videos [81], adaptive directional lifting [13,26], and grouplets [66]. We omit further discussions on these adaptive signal representations and refer readers to the references cited above for more details.…”
Section: Other Multiscale Geometric Representations (mentioning)
confidence: 99%
“…In fact, most recent interest in wavelet-based video coding has migrated away from the traditional hybrid MC-feedback architecture considered here in favor of motion-compensated temporal filtering (MCTF) in order to provide full fidelity, spatial, and temporal scalability. Recent MCTF-based video coders have employed the RDWT (e.g., [33,2,22]), while others have used meshes (e.g., [27,28]). Indeed, our recent work [31,30] has focused on combining uniform meshes and the RDWT within the MCTF framework and has produced a scalable coder with state-of-the-art rate-distortion performance.…”
Section: Introduction (mentioning)
confidence: 99%