A depth image carries the three-dimensional (3D) information used for virtual view synthesis in a 3D video system. In depth coding, object boundaries are hard to compress and, because they are sensitive to coding errors, severely affect rendering quality. In this paper, we propose a depth boundary reconstruction filter and use it as an in-loop filter for depth video coding. The proposed filter is designed by considering the occurrence frequency, similarity, and closeness of pixels. Experimental results demonstrate that the proposed depth boundary reconstruction filter is useful for efficient depth coding as well as high-quality 3D rendering. (IEEE Transactions on Circuits and Systems for Video Technology)
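The three weighting criteria named in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact filter: the Gaussian kernel forms, the parameter names (`sigma_s`, `sigma_d`), and the window size are assumptions. Each depth value occurring in a local window is scored by spatial closeness and depth similarity, and occurrence frequency enters because repeated values accumulate weight; the best-scoring value replaces the center pixel, which snaps noisy boundary pixels back to a dominant local depth.

```python
import numpy as np

def boundary_reconstruction_filter(depth, radius=2, sigma_s=2.0, sigma_d=10.0):
    """Sketch of a boundary reconstruction filter for depth maps.

    For each pixel, every distinct depth value v in the window is scored
    score(v) = sum_j closeness(j) * similarity(v, d_j); frequency is
    implicit, since each occurrence of v adds to its score. Parameters
    are illustrative, not taken from the paper.
    """
    h, w = depth.shape
    out = depth.copy()
    pad = np.pad(depth, radius, mode='edge').astype(np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2.0 * sigma_s**2))  # closeness
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            candidates = np.unique(win)
            # similarity of each candidate value to every window sample
            diff = candidates[:, None] - win.ravel()[None, :]
            sim = np.exp(-diff**2 / (2.0 * sigma_d**2))
            scores = sim @ spatial.ravel()
            out[y, x] = candidates[np.argmax(scores)]
    return out
```

Unlike a median filter, this keeps a clean step edge intact (each side of the edge wins on its own side) while an isolated outlier, whose score comes only from its single occurrence, is replaced by the locally dominant depth.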
Virtual view synthesis is one of the most important techniques for realizing free viewpoint television and three-dimensional (3D) video. In this article, we propose a view synthesis method to generate high-quality intermediate views in such applications, together with new evaluation metrics, named spatial peak signal-to-noise ratio and temporal peak signal-to-noise ratio, to measure spatial and temporal consistency, respectively. The proposed view synthesis method consists of five major steps: depth preprocessing, depth-based 3D warping, depth-based histogram matching, base plus assistant view blending, and depth-based hole-filling. The efficiency of the proposed view synthesis method has been verified by evaluating the quality of the synthesized images with various metrics, such as peak signal-to-noise ratio, structural similarity, the discrete cosine transform (DCT)-based video quality metric, and the newly proposed metrics. We have also confirmed that the synthesized images are objectively and subjectively natural.
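Of the five steps listed, depth-based 3D warping is the geometric core. A minimal sketch of horizontal forward warping with z-buffering follows; the `depth_to_disparity` mapping and the 8-bit convention that larger depth values are nearer are assumptions, not details from the article:

```python
import numpy as np

def warp_to_virtual_view(color, depth, depth_to_disparity):
    """Forward-warp a reference view horizontally into a virtual view.

    Each pixel is shifted by a disparity derived from its depth. When
    several source pixels land on the same target position, the nearer
    one (larger 8-bit depth value, per the usual convention) wins via a
    z-buffer. Pixels never written are reported as holes, to be handled
    by the later blending and hole-filling steps.
    """
    h, w = depth.shape
    out = np.zeros_like(color)
    zbuf = np.full((h, w), -np.inf)
    for y in range(h):
        for x in range(w):
            xv = x + int(round(depth_to_disparity(depth[y, x])))
            if 0 <= xv < w and depth[y, x] > zbuf[y, xv]:
                zbuf[y, xv] = depth[y, x]
                out[y, xv] = color[y, x]
    holes = np.isneginf(zbuf)  # never written -> disocclusion holes
    return out, holes
```

The returned hole mask is exactly what makes the subsequent blending and depth-based hole-filling steps necessary: disoccluded regions have no source pixel in the reference view.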
Guaranteeing interoperability between devices and applications is the core role of standards organizations. Since its first JPEG standard in 1992, the Joint Photographic Experts Group (JPEG) has published several image coding standards that have been successful in a plethora of imaging markets. Recently, these markets have become subject to potentially disruptive innovations owing to the rise of new imaging modalities such as light fields, point clouds, and holography. These so-called plenoptic modalities hold the promise of facilitating a more efficient and complete representation of 3D scenes when compared to classic 2D modalities. However, due to the heterogeneity of plenoptic products that will hit the market, serious interoperability concerns have arisen. In this paper, we particularly focus on the holographic modality and outline how the JPEG committee has addressed these tremendous challenges. We discuss the main use cases and provide a preliminary list of requirements. In addition, based on the discussion of real-valued and complex data representations, we elaborate on potential coding technologies that range from approaches utilizing classical 2D coding technologies to holographic content-aware coding solutions. Finally, we address the problem of visual quality assessment of holographic data, covering both visual quality metrics and subjective assessment methodologies.
This paper presents an efficient view synthesis distortion estimation method for 3-D video. It also introduces the application of this method to Advanced Video Coding (AVC)- and High Efficiency Video Coding (HEVC)-compatible 3-D video coding. Although the proposed view synthesis distortion scheme is generic, its use in actual 3-D video codec systems raises many issues caused by different video-coding formats and restrictions; solutions for these issues are proposed herein. The simulation results show that the proposed scheme achieves approximately 5.4% and 10.2% coding gains for AVC- and HEVC-compatible 3-D coding, respectively. In addition, the results show a remarkable complexity reduction compared to the view synthesis optimization method currently used in 3-D-HEVC. The proposed method has been adopted into the AVC- and HEVC-compatible test model reference software currently under development. Index Terms: 3-D Advanced Video Coding (AVC), 3-D High Efficiency Video Coding (HEVC), 3-D video (3-DV) codec, depth map coding, multiview coding, view synthesis distortion, view synthesis distortion estimation.
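A common way to estimate view synthesis distortion without actually rendering the virtual view, shown here only as an illustrative stand-in for the paper's scheme, is to map the depth coding error to a disparity (warping-position) error and weight it by the local texture gradient; the constant `k`, bundling focal length, baseline, and depth range, is an assumption:

```python
import numpy as np

def estimate_vsd(texture, depth_orig, depth_rec, k):
    """Rough view-synthesis-distortion (VSD) estimate.

    A depth error translates into a horizontal warping-position error
    of roughly k * (depth_rec - depth_orig) pixels, and the induced
    synthesis error is approximated by that position error times the
    local horizontal texture gradient. Illustrative model, not the
    paper's exact scheme.
    """
    dp_err = k * (depth_rec.astype(np.float64) - depth_orig)
    # horizontal gradient: sensitivity of the synthesized view to
    # small shifts in warping position
    grad = np.abs(np.gradient(texture.astype(np.float64), axis=1))
    return np.sum((dp_err * grad) ** 2)
```

The model captures why depth errors in flat texture regions are cheap (zero gradient, zero estimated distortion) while the same errors near textured edges are expensive, which is what makes such an estimate useful for rate-distortion optimization without per-mode rendering.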