This paper reviews the first challenge on high-dynamic-range (HDR) imaging that was part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2021. This manuscript focuses on the newly introduced dataset, the proposed methods, and their results. The challenge aims at estimating an HDR image from one or multiple respective low-dynamic-range (LDR) observations, which might suffer from under- or over-exposed regions and different sources of noise. The challenge comprises two tracks: in Track 1 only a single LDR image is provided as input, whereas in Track 2 three differently exposed LDR images with inter-frame motion are available. In both tracks, the ultimate goal is to achieve the best objective HDR reconstruction in terms of PSNR with respect to a ground-truth image, evaluated both directly and after a canonical tonemapping operation.
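The two evaluation scores can be illustrated with a minimal sketch, assuming the µ-law curve commonly used in the HDR literature (with µ = 5000) as the "canonical" tonemapper; the exact operator and normalization used by the challenge are an assumption here, not a statement of its official protocol.

```python
import numpy as np

MU = 5000.0  # assumed mu-law parameter; the challenge's exact tonemapper may differ

def mu_law_tonemap(hdr):
    """Compress a linear HDR image (values in [0, 1]) with the mu-law curve."""
    return np.log1p(MU * hdr) / np.log1p(MU)

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio between two images sharing the same range."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# PSNR computed directly on linear HDR values and after tonemapping,
# mirroring the two scores reported by the challenge.
pred = np.random.rand(256, 256, 3).astype(np.float32)
gt = np.clip(pred + 0.01 * np.random.randn(256, 256, 3), 0, 1).astype(np.float32)
print("PSNR (linear)   :", psnr(pred, gt))
print("PSNR (tonemapped):", psnr(mu_law_tonemap(pred), mu_law_tonemap(gt)))
```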
With their finer spatial scale, high-resolution images provide complex, spatially detailed, and massive information on the Earth's surface, which brings new challenges to remote sensing segmentation methods. In view of these challenges, finding a more effective segmentation model and parallel processing method is crucial to improve the segmentation accuracy and processing efficiency of large-scale high-resolution images. To this end, this study proposed a minimum spanning tree (MST) model integrated into a region-based parallel segmentation method. First, an image was decomposed into several blocks by regular tessellation. The corresponding homogeneous regions were obtained using the minimum heterogeneity rule (MHR) partitioning technique in a multicore parallel processing mode, and the initial segmentation results were obtained by the parallel block merging method. On this basis, a regionalized fuzzy c-means (FCM) method based on a master-slave parallel mode was proposed to achieve fast and optimal segmentation. The proposed segmentation approach was tested on high-resolution images. The results from the qualitative assessment, quantitative evaluation, and parallel analysis verified the feasibility and validity of the proposed method.

The abundant spatial and geometric information, which determines the spatial and geometric models, must be taken into account in building the segmentation model. The massive data volume makes the decomposition-based parallel algorithm a realistic choice for addressing the complexity of computing time, and this approach has become one of the most effective ways to expand and optimize existing segmentation methods to meet the processing requirements of massive remote sensing images [11]. Many researchers have proposed various parallel image segmentation methods for large-scale high-resolution images [12][13][14]. For example, Xing et al. [15] proposed a parallel remote sensing image segmentation method that combines a decomposition/merging mode with k-means algorithms based on geospatial cyberinfrastructure (GCI). Under this mode, the large-scale image is decomposed into several blocks, which are divided into regions in parallel. The merging process restores the block segmentation outcome to the whole-image segmentation result and considers the merging of boundary regions between blocks, which solves the problem of over-segmentation caused by decomposition. However, the k-means segmentation algorithm only considers spectral information and not spatial and geometric information, resulting in difficulties in handling the complexity of high-resolution images.

As a branch of mathematics, graph theory uses the graph as its primary object of study and is able to describe the internal relations of vertex sets [16][17][18][19]. The remote sensing image representation model is built by mapping the coordinates and spectral information of the pixels to vertices. The adjacency relationship of the pixels is regarded as the connected edges between the vertices, whi...
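The decompose/segment-in-parallel/reassemble skeleton described above can be sketched as follows. This is only an illustration of the block-parallel structure: the per-block segmenter here is a plain fuzzy c-means clustering on pixel values, standing in for the paper's MHR partitioning, MST model, and regionalized FCM, and no cross-block region merging is performed. All function names are hypothetical.

```python
import numpy as np
from multiprocessing import Pool

def fcm_labels(pixels, n_clusters=3, m=2.0, n_iter=20, seed=0):
    """Plain fuzzy c-means on flattened pixel vectors; returns hard labels."""
    rng = np.random.default_rng(seed)
    u = rng.random((pixels.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ pixels) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / d ** (2.0 / (m - 1))   # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return u.argmax(axis=1)

def segment_block(block):
    """Segment one tessellation block independently (runs in a worker process)."""
    h, w, c = block.shape
    return fcm_labels(block.reshape(-1, c)).reshape(h, w)

def parallel_segment(image, block_size=128, workers=4):
    """Regular tessellation into blocks, per-block segmentation in parallel,
    then reassembly; a full pipeline would also merge regions across block borders."""
    h, w, _ = image.shape
    blocks, coords = [], []
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            blocks.append(image[y:y + block_size, x:x + block_size])
            coords.append((y, x))
    with Pool(workers) as pool:
        results = pool.map(segment_block, blocks)
    labels = np.zeros((h, w), dtype=np.int64)
    for (y, x), res in zip(coords, results):
        labels[y:y + res.shape[0], x:x + res.shape[1]] = res
    return labels
```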
In this paper, we present an attention-guided deformable convolutional network for hand-held multi-frame high dynamic range (HDR) imaging, namely ADNet. This problem poses two intractable challenges: how to handle saturation and noise properly, and how to tackle misalignments caused by object motion or camera jittering. To address the former, we adopt a spatial attention module to adaptively select the most appropriate regions of the variously exposed low dynamic range (LDR) images for fusion. For the latter, we propose to align the gamma-corrected images at the feature level with a Pyramid, Cascading and Deformable (PCD) alignment module. The proposed ADNet shows state-of-the-art performance compared with previous methods, achieving a PSNR-L of 39.4471 and a PSNR-µ of 37.6359 in the NTIRE 2021 Multi-Frame HDR Challenge. Recently, several learning-based methods have been explored. Kalantari et al. proposed the first deep convolu-
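A minimal sketch of the spatial-attention idea is given below: a per-pixel gate for a non-reference exposure's features is predicted from those features concatenated with the reference features, and applied multiplicatively. This is a generic attention-fusion block under my own assumptions, not the exact ADNet module, and the channel width and layer choices are illustrative only.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Gates non-reference-exposure features conditioned on reference features,
    so saturated or misaligned regions contribute less to the fusion."""
    def __init__(self, channels=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_nonref, feat_ref):
        attn = self.gate(torch.cat([feat_nonref, feat_ref], dim=1))
        return feat_nonref * attn  # per-pixel, per-channel soft selection

# usage sketch: gate short-exposure features against the reference exposure
att = SpatialAttention(64)
f_ref, f_short = torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128)
gated_short = att(f_short, f_ref)
```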