2008
DOI: 10.1111/j.1467-8659.2008.01115.x
Render2MPEG: A Perception‐based Framework Towards Integrating Rendering and Video Compression

Abstract: Currently, 3D animation rendering and video compression are completely independent processes, even if rendered frames are streamed on-the-fly within a client-server platform. In such a scenario, which may involve time-varying transmission bandwidths and different display characteristics at the client side, dynamically adjusting the rendering quality to these requirements can lead to better use of server resources. In this work, we present a framework where the renderer and MPEG codec are coupled through a straigh…

Cited by 11 publications (11 citation statements)
References 28 publications
“…The hybrid video coders [Wang et al 2001] integrate lossy image compression tools with motion-compensation tools to exploit the temporal redundancy. The hybrid video coders have been standardized as, for example, MPEG-4 [Schafer 1998] and H.264/AVC [Wiegand et al 2003], which (or their variations) are used by several modern remote rendering systems [Noimark and Cohen-Or 2003; Lamberti and Sanna 2007; Jurgelionis et al 2009; De Winter et al 2006; Perlman et al 2010; Shi et al 2011a; Herzog et al 2008; Huang et al 2013]. The video coding tools proposed in Tran et al [1998], Bayazit [1999], and Liu et al [2007] adopted some or all of the real-time coding constraints discussed in Reddy and Chunduri [2006] and Schreier et al [2006].…”
Section: Data Compressionmentioning
confidence: 99%
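The motion-compensation step that the citation above describes can be illustrated with a minimal block-matching sketch. This is not the coder used in any of the cited systems; the function name, block size, and exhaustive SAD search are illustrative assumptions. A hybrid coder would entropy-code the motion vectors and pass the residual to its lossy image-compression stage.

```python
import numpy as np

def motion_compensate(ref, cur, block=8, search=4):
    """Exhaustive block-matching motion estimation (SAD criterion).

    For each block of the current frame, find the best-matching block
    in the reference frame within +/-`search` pixels. Returns the
    per-block motion vectors and the residual frame that a hybrid
    coder would hand to its lossy image-compression stage.
    """
    h, w = cur.shape
    pred = np.zeros_like(cur)
    vectors = {}
    for by in range(0, h, block):
        for bx in range(0, w, block):
            target = cur[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block lies outside the frame
                    cand = ref[y:y + block, x:x + block]
                    sad = np.abs(cand.astype(int) - target.astype(int)).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            dy, dx = best
            pred[by:by + block, bx:bx + block] = \
                ref[by + dy:by + dy + block, bx + dx:bx + dx + block]
            vectors[(by, bx)] = best
    residual = cur.astype(int) - pred.astype(int)
    return vectors, residual
```

For purely translational motion the residual is near zero, which is exactly the temporal redundancy the hybrid coders exploit; real coders add sub-pixel search and rate-distortion-aware mode decisions on top of this idea.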
“…To maintain interactivity, the transfer size is typically reduced by employing JPEG [6] or MPEG [5] compression. An alternative technique employing a CUDA-based parallel compression method was presented by Lietsch and Marquardt [9], while Pajak et al [14] discuss efficient compression and streaming of frames rendered from a dynamic 3D model.…”
Section: In-situ and Remote Renderingmentioning
confidence: 99%
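The JPEG-style transform coding mentioned above reduces transfer size by quantizing DCT coefficients; a coarser quantizer produces more zero coefficients and hence fewer bits on the wire, at lower visual quality. The following sketch shows that core trade-off for a single 8x8 block; it is a simplified illustration (scalar quantization with one uniform step `q`), not the full JPEG pipeline with its standard quantization tables, zig-zag scan, and entropy coding.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def quantize_block(block, q):
    """2-D DCT of an 8x8 block followed by uniform quantization.

    Larger q -> more coefficients round to zero -> fewer bits to
    transmit, at the price of reconstruction quality.
    """
    d = dct_matrix()
    coeffs = d @ block @ d.T      # separable 2-D DCT
    return np.round(coeffs / q).astype(int)

def dequantize_block(qcoeffs, q):
    """Rescale quantized coefficients and apply the inverse 2-D DCT."""
    d = dct_matrix()
    return d.T @ (qcoeffs * q) @ d
```

A flat block compacts all its energy into the DC coefficient, so after quantization a single nonzero value suffices to reconstruct it exactly; textured blocks spread energy across more coefficients and degrade gracefully as `q` grows.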
“…This is used for video compression, e.g. MPEG [HKMS08]. The main difference from our method is that we do not display a visually acceptable approximation for selected in-between frames, but compute a correct simulation in each frame.…”
Section: Previous Workmentioning
confidence: 99%