The encoding format of the 3D extension of high efficiency video coding (3D-HEVC) consists of a multiview color texture and an associated depth map. Because of the unique characteristics of the depth map, advanced coding techniques have been designed for depth map coding at the expense of computational complexity. In this paper, fast algorithms are conceived to accelerate the intra coding of the depth map based on boundary continuity. First, the proposed fast prediction unit (PU) mode decision reduces the number of conventional intra prediction modes by calculating the total sum of squares (TSS) of the PU boundaries. Second, the proposed fast depth modeling mode (DMM) decision uses the variances of the boundary pixels to determine whether the DMM is executed. Third, the proposed coding unit (CU) early termination algorithm decides whether to further split the current CU by utilizing thresholds on the TSS and the rate-distortion cost (RD-cost). The experimental results show that the proposed algorithm provides better performance in terms of coding speed and bitrate than algorithms in previous works. The coding time of the depth map is reduced by 56.08%, while the Bjøntegaard delta bitrate (BD-BR) is increased by only 0.32% for the synthesis view.
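The first step of the abstract, the TSS-based fast PU mode decision, can be illustrated with a minimal sketch. The abstract does not define the TSS computation or the decision threshold precisely, so this sketch assumes TSS is the sum of squared deviations of the PU's top and left boundary pixels from their mean, and the threshold value is illustrative, not from the paper:

```python
import numpy as np

def boundary_tss(pu):
    """TSS of the PU's top row and left column (assumed formulation:
    squared deviation of boundary pixels from their mean)."""
    boundary = np.concatenate([pu[0, :], pu[:, 0]]).astype(float)
    return float(np.sum((boundary - boundary.mean()) ** 2))

def reduced_mode_set(pu, threshold=50.0):
    """If the boundary is smooth (low TSS), test only Planar and DC;
    otherwise fall back to the full set of 35 conventional HEVC intra
    modes. The threshold of 50.0 is a hypothetical placeholder."""
    if boundary_tss(pu) < threshold:
        return [0, 1]            # Planar (0) and DC (1) only
    return list(range(35))       # full conventional intra mode set
```

A flat PU yields a TSS of zero and therefore the reduced two-mode set, while a PU with a varying boundary keeps all 35 candidates.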
3D-HEVC (the 3D extension of High Efficiency Video Coding), the latest international standard for 3D video coding, supports the multiview-plus-depth 3D video format to enrich multimedia applications. For texture coding, 3D-HEVC utilizes not only the information of the temporal and spatial domains but also that of the inter-view domain. However, the time consumption and complexity of 3D-HEVC also increase significantly. In this paper, a fast texture coding algorithm for 3D-HEVC is proposed. We individually calculate Pearson correlation coefficients over the rate-distortion costs (RD-costs) of coding tree units (CTUs) in the temporal, spatial, and inter-view domains to analyze the correlations for the independent view and the dependent view. The proposed coding algorithm is based on the coding information of the CTUs with higher correlations. The fast algorithm predicts and dynamically adjusts the depth range of the coding unit (CU). The prediction unit (PU) mode decision is made according to the complexity and partition direction of the best PU modes obtained from the highly correlated CTUs. The search range is adaptively adjusted for motion estimation. In addition, an RD-cost threshold is estimated to terminate the CU split early. Experimental results show that the proposed fast texture coding algorithm reduces the texture coding time by 40.75% on average and significantly outperforms numerous previous works.
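The correlation analysis described above can be sketched as follows. This is not the paper's implementation: the RD-cost series, the domain names, and the helper `most_correlated_domain` are hypothetical, showing only how Pearson coefficients could rank the temporal, spatial, and inter-view domains as references for a current CTU:

```python
import numpy as np

def pearson_corr(x, y):
    """Pearson correlation coefficient between two RD-cost series."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

def most_correlated_domain(current_rd, domain_rd):
    """Pick the reference domain (e.g. temporal / spatial / inter-view)
    whose co-located CTU RD-costs correlate most strongly with the
    current CTUs' RD-costs. `domain_rd` maps domain name -> RD-cost list."""
    return max(domain_rd, key=lambda d: pearson_corr(current_rd, domain_rd[d]))
```

In the paper's scheme, the coding information (CU depth range, PU modes, search range) would then be reused from the domain selected this way.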
This paper designs a novel method to reduce the coding complexity of the 3D-HEVC encoder by utilizing the properties of human visual perception. Two vision-oriented edge detections are proposed: for colour texture detection, the authors adopt the Just-Noticeable Distortion (JND) model; for the depth map, the authors combine the Sample Adaptive Offset (SAO) and the Just Noticeable Depth Difference (JNDD) model. The authors also analyse the properties of the colour texture and the depth map to classify each coding tree unit (CTU) into one of several types: complex-edge CTU, moderate-edge CTU, and homogeneous CTU. In addition, fast mode decisions and early termination criteria are applied individually to each type of CTU according to its characteristics. In particular, for CTUs with more edge information, the proposed projection-based fast mode decision and residual-based early termination preserve important colour texture while speeding up the coding. The proposed vision-oriented algorithm reduces the overall average coding time by 31.981% with only a 1.580% BD-Bitrate increase. Experimental results show that the proposed algorithm provides considerable time savings while maintaining video quality, outperforming previous approaches.
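The CTU classification step can be sketched with a simplified stand-in for the perceptual models. This is only an illustration, not the paper's method: the JND/JNDD models are replaced by a fixed gradient-visibility threshold, and the class boundaries (`low`, `high`) are hypothetical values chosen for the example:

```python
import numpy as np

def classify_ctu(ctu, vis_thresh=10.0, low=0.05, high=0.25):
    """Classify a CTU as 'homogeneous', 'moderate-edge', or 'complex-edge'
    by the fraction of pixel-to-pixel gradients exceeding a visibility
    threshold (a crude stand-in for the JND/JNDD edge detections)."""
    block = np.asarray(ctu, float)
    gx = np.abs(np.diff(block, axis=1))          # horizontal gradients
    gy = np.abs(np.diff(block, axis=0))          # vertical gradients
    edge_frac = (np.count_nonzero(gx > vis_thresh)
                 + np.count_nonzero(gy > vis_thresh)) / (gx.size + gy.size)
    if edge_frac < low:
        return "homogeneous"
    if edge_frac < high:
        return "moderate-edge"
    return "complex-edge"
```

In the paper's scheme, each resulting class would then be routed to its own fast mode decision and early-termination rules.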