This paper describes an extension of the high efficiency video coding (HEVC) standard for coding of multi-view video and depth data. In addition to the known concept of disparity-compensated prediction, inter-view motion parameter prediction and inter-view residual prediction for coding of the dependent video views are developed and integrated. Furthermore, for depth coding, new intra coding modes, a modified motion compensation and motion vector coding, as well as the concept of motion parameter inheritance are part of the HEVC extension. A novel encoder control uses view synthesis optimization, which guarantees that high-quality intermediate views can be generated from the decoded data. The bitstream format supports the extraction of partial bitstreams, so that conventional 2D video, stereo video, and the full multi-view video plus depth format can be decoded from a single bitstream. Objective and subjective results are presented, demonstrating that the proposed approach provides 50% bit rate savings in comparison with HEVC simulcast and 20% in comparison with a straightforward multi-view extension of HEVC without the newly developed coding tools.
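To illustrate the idea behind inter-view motion parameter prediction, the following minimal Python sketch derives a motion-vector candidate for a block in a dependent view by reusing the motion vector of the disparity-shifted corresponding block in the already coded base view. The function name, the 4x4 motion-storage grid, and the single horizontal disparity value are simplifying assumptions for illustration; the actual derivation in the HEVC extension is considerably more elaborate.

```python
def derive_interview_motion_candidate(base_mv_field, x, y, w, h, disparity_px):
    """Return a motion-vector candidate for the current block of a dependent
    view by looking up the corresponding block in the base view.

    base_mv_field: 2D list (one entry per 4x4 luma block of the base view)
                   holding (mv_x, mv_y) tuples, or None for intra blocks
    x, y, w, h:    position and size of the current block in luma samples
    disparity_px:  estimated horizontal disparity between the two views
    """
    # Centre sample of the current block, shifted into the base view
    # (horizontal-only shift, assuming rectified cameras).
    ref_x = x + w // 2 + int(round(disparity_px))
    ref_y = y + h // 2

    # Clip to the picture area and map to the 4x4 motion-storage grid.
    rows, cols = len(base_mv_field), len(base_mv_field[0])
    gx = min(max(ref_x // 4, 0), cols - 1)
    gy = min(max(ref_y // 4, 0), rows - 1)

    # Reuse the base-view motion vector if available; otherwise the encoder
    # would fall back to other (spatial or temporal) candidates.
    return base_mv_field[gy][gx]
```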
This paper presents a new approach for 3D video coding, in which the video and the depth component of a multi-view video plus depth (MVD) representation are jointly coded in an integrated framework. This enables a new type of prediction that exploits the correlation between video and depth signals, in addition to existing methods for temporal and inter-view prediction. Our new method is referred to as inter-component prediction, and we apply it for predicting non-rectangular partitions in depth blocks. By dividing a block into two regions, each represented by a constant value, such block partitions are well adapted to the characteristics of depth maps. The results show that this approach reduces the bit rate of the depth component by up to 11% and leads to an increased quality of rendered views.
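As a rough illustration of such two-region depth block partitions, the Python sketch below approximates a depth block by a partition mask and one constant value per region (here simply the region mean). The function name, the use of the mean as the constant, and the toy wedge-shaped mask are assumptions made for illustration only and do not reproduce the exact partition signalling or prediction of the proposed codec.

```python
import numpy as np

def approximate_depth_block(block, partition_mask):
    """Approximate a depth block by a two-region partition, each region
    represented by a single constant value (here: the region mean).

    block:          2D numpy array with the original depth samples
    partition_mask: boolean array of the same shape; True marks region 1,
                    False marks region 2 (the two sides of a wedge/contour)
    Returns the reconstructed block and the two constant values.
    """
    c1 = block[partition_mask].mean()
    c2 = block[~partition_mask].mean()
    recon = np.where(partition_mask, c1, c2)
    return recon, (c1, c2)

# Toy usage: a diagonal (wedge-like) split of an 8x8 depth block.
blk = np.fromfunction(lambda y, x: np.where(x > y, 200.0, 60.0), (8, 8))
mask = np.fromfunction(lambda y, x: x > y, (8, 8))
recon, (c1, c2) = approximate_depth_block(blk, mask)
```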
The recently finalized Versatile Video Coding (VVC) standard promises to reduce the video bit rate by 50% compared to its predecessor, High Efficiency Video Coding (HEVC). The increased efficiency comes at the cost of an increased computational burden. The Fraunhofer Versatile Video Encoder (VVenC) is the first openly available optimized implementation providing access to VVC's efficiency at only 46% of the runtime of the VVC test model VTM when not using multi-threading. An alternative operating point allows 30× faster encoding at the price of around 12% bit rate increase, while still providing around 38% bit rate reduction compared to the HEVC test model HM. In the fastest configuration, VVenC runs over 140× faster than VTM while still providing over 10% bit rate reduction compared to HM. Even faster encoding is possible with multi-threading. This paper provides an overview of VVenC's main features and some evaluation results.
The presented approach for 3D video coding uses the multi-view video plus depth format, in which a small number of video views as well as associated depth maps are coded. Based on the coded signals, additional views required for displaying the 3D video on an autostereoscopic display can be generated by depth-image-based rendering techniques. The developed coding scheme represents an extension of HEVC, similar to the MVC extension of H.264/AVC. However, in addition to the well-known disparity-compensated prediction, advanced techniques for inter-view and inter-component prediction, the representation of depth blocks, and the encoder control for depth signals have been integrated. In comparison to simulcasting the different signals using HEVC, the proposed approach provides about 40% and 50% bit rate savings for the tested configurations with two and three views, respectively. Bit rate reductions of about 20% have been obtained in comparison to a straightforward multi-view extension of HEVC without the newly developed coding tools.
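The following Python sketch outlines the basic principle of depth-image-based rendering for rectified, parallel cameras: each reference-view pixel is shifted horizontally by a disparity computed from its depth value. The inverse-depth quantization of the 8-bit depth map, the parameter names, and the omission of occlusion and hole handling are simplifying assumptions; practical view synthesis, such as that used to generate the intermediate views mentioned above, involves considerably more processing.

```python
import numpy as np

def render_virtual_view(texture, depth, focal_px, baseline_m, z_near, z_far):
    """Very simplified depth-image-based rendering (DIBR).

    texture:  (H, W, 3) reference-view image
    depth:    (H, W) 8-bit depth map, 255 = nearest (z_near), 0 = farthest (z_far)
    focal_px: focal length in pixels; baseline_m: camera distance in metres
    Returns the warped virtual view (disocclusion holes remain black).
    """
    h, w = depth.shape
    # Convert the quantized depth to metric depth Z (assumed inverse-depth
    # quantization), then compute the disparity d = f * B / Z.
    z = 1.0 / (depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    disparity = np.round(focal_px * baseline_m / z).astype(int)

    virtual = np.zeros_like(texture)
    xs = np.arange(w)
    for y in range(h):
        tx = xs + disparity[y]              # target columns in the virtual view
        valid = (tx >= 0) & (tx < w)        # drop pixels shifted out of the image
        virtual[y, tx[valid]] = texture[y, xs[valid]]
    return virtual
```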