2005
DOI: 10.1016/j.image.2004.08.006
Unconstrained motion compensated temporal filtering (UMCTF) for efficient and flexible interframe wavelet video coding

Abstract: We introduce an efficient and flexible framework for temporal filtering in wavelet-based scalable video codecs called unconstrained motion compensated temporal filtering (UMCTF). UMCTF allows for the use of different filters and temporal decomposition structures through a set of controlling parameters that may be easily modified during the coding process, at different granularities and levels. The proposed framework enables the adaptation of the coding process to the video content, network and end-device chara…

Cited by 28 publications (14 citation statements)
References 27 publications
“…In a first step, motion-compensated temporal filtering of the input is employed in order to remove the temporal correlation among different frames: in essence, MCTF corresponds to a discrete wavelet transform (DWT) performed along the motion trajectories. The temporal transform is implemented via lifting [23], as illustrated in the simple example of Haar MCTF [6], [22] shown in Fig. 1.…”
Section: Single Description MCTF-Based Video Coding
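As a rough illustration of the lifting implementation mentioned in this citation (a minimal sketch, not the codec's actual implementation), one Haar MCTF step can be written as a predict/update pair; here the motion-compensation operator `mc` is reduced to an identity mapping (zero motion) for clarity:

```python
import numpy as np

def haar_mctf_analysis(even, odd, mc=lambda f: f):
    """One Haar MCTF lifting step on a frame pair (x[2t], x[2t+1]).

    even, odd : consecutive frames as arrays
    mc        : motion-compensation operator (identity here, i.e. zero motion)
    """
    h = odd - mc(even)       # predict step: high-pass frame (temporal detail)
    l = even + 0.5 * mc(h)   # update step: low-pass frame (temporal average)
    return l, h

def haar_mctf_synthesis(l, h, mc=lambda f: f):
    """Invert the lifting step, recovering the original frame pair."""
    even = l - 0.5 * mc(h)   # undo update
    odd = h + mc(even)       # undo predict
    return even, odd
```

Because lifting steps are individually invertible, analysis followed by synthesis reconstructs the frames exactly, regardless of the (invertible) motion model used.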
See 2 more Smart Citations
“…In a first step, motion-compensated temporal filtering of the input is employed in order to remove the temporal correlation among different frames: in essence, MCTF corresponds to a discrete wavelet transform (DWT) performed along the motion trajectories. The temporal transform is implemented via lifting [23], as illustrated in the simple example of Haar MCTF [6], [22] shown in Fig. 1.…”
Section: Single Description Mctf-based Video Codingmentioning
confidence: 99%
“…Finally, we indicate the subband of a frame as . With these notations, (5) compacts into (6), shown at the bottom of the next page. The above formulation of the overall distortion in the reconstructed sequence is particularly useful because the proposed system quantizes each subband and motion field independently, and each contribution to the final distortion is individually evaluated at encoding time, as explained in the following section.…”
Section: A. Distortion Estimation in MCTF Followed by Quantization
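The additive distortion model this citation describes can be sketched as follows (an illustrative stand-in, not the formulation of (6) itself; the weights here play the role of the synthesis-gain factors, and both names are hypothetical):

```python
def total_distortion(subband_mse, weights):
    """Estimate overall reconstruction distortion as a weighted sum of
    per-subband quantization MSEs, measured independently per subband."""
    return sum(w * d for w, d in zip(weights, subband_mse))
```

Because each subband is quantized independently, its contribution can be evaluated in isolation at encoding time and the total simply accumulated.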
“…The salient aspect of this coder lies in that we employ multihypothesis motion compensation (MHMC) within the MCTF to combat the uncertainty inherent in estimating motion trajectories for MCTF, thereby achieving rate-distortion performance significantly superior to the usual single-hypothesis MCTF approach. Although multihypothesis has been used in conjunction with MCTF before (e.g., [9]- [11], [15] propose both spatially and temporally diverse multihypothesis MCTF predictions), in our proposed system, we employ a new class of MHMC-phase-diversity multihypothesis [17], [18]. Specifically, phase-diversity MHMC is implemented by deploying MCTF in the domain of a spatially redundant wavelet transform such that multiple hypothesis temporal filterings are combined implicitly in the form of an inverse transform.…”
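The core multihypothesis idea referenced above can be sketched minimally (a hypothetical helper, not the phase-diversity scheme itself, which combines hypotheses implicitly through an inverse redundant wavelet transform): several motion-compensated predictions of the same frame are merged, e.g. by weighted averaging, to reduce the impact of any single wrong motion estimate:

```python
import numpy as np

def combine_hypotheses(predictions, weights=None):
    """Combine several motion-compensated predictions of one frame.

    predictions : list of equally shaped arrays (one per hypothesis)
    weights     : per-hypothesis weights (uniform average if None)
    """
    preds = np.stack(predictions)
    if weights is None:
        weights = np.full(len(predictions), 1.0 / len(predictions))
    # weighted sum over the hypothesis axis
    return np.tensordot(np.asarray(weights), preds, axes=1)
```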
“…This filtering is performed in the direction of motion. Assuming Haar wavelets, MCTF can be written using the lifting formulation [7] as follows:…”
Section: Motion Compensated Temporal Filtering
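For reference, the Haar lifting formulation alluded to in this citation is commonly written as a predict step followed by an update step (generic notation, not copied from [7]; \(\mathcal{W}_{a \to b}\) denotes motion-compensated warping from frame \(a\) toward frame \(b\)):

```latex
H_t = x_{2t+1} - \mathcal{W}_{2t \to 2t+1}\!\left(x_{2t}\right), \qquad
L_t = x_{2t} + \tfrac{1}{2}\,\mathcal{W}_{2t+1 \to 2t}\!\left(H_t\right)
```

The high-pass frame \(H_t\) captures the temporal detail left after motion-compensated prediction, and the low-pass frame \(L_t\) is a motion-aligned temporal average; inverting the two steps in reverse order recovers the original frames exactly.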