Accurately quantifying water inundation dynamics, in terms of both spatial distribution and temporal variability, is essential for water resources management. Currently, water maps are usually derived from synthetic aperture radar (SAR) data with the support of auxiliary datasets, using thresholding methods followed by morphological operations to further refine the results. However, auxiliary datasets may lose efficacy over large plain areas, whilst the parameters of morphological operations are hard to determine across different situations. Here, a heuristic and automatic water extraction (HAWE) method is proposed to extract water maps from Sentinel-1 SAR data. In HAWE, we integrate tile-based thresholding and the active contour model, in which the former provides a convincing initial water map used as a heuristic input, and the latter refines the initial map using image gradient information. The proposed approach was tested on the Dongting Lake plain (China) by comparing the extracted water map with reference data derived from the Sentinel-2 dataset. For the two selected test sites, the overall accuracy of water classification is between 94.90% and 97.21%, whilst the Kappa coefficient is between 0.89 and 0.94. For the entire study area, the overall accuracy is between 94.32% and 96.70%, and the Kappa coefficient ranges from 0.80 to 0.90. The results show that the proposed method is capable of extracting water inundation with satisfactory accuracy.
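As a rough illustration of the two-stage idea, the sketch below pairs per-tile Otsu thresholding with a gradient-driven morphological geodesic active contour from scikit-image. The tile size, the choice of Otsu for the tile-based threshold, and the specific active-contour variant are assumptions made for illustration, not the authors' exact implementation.

```python
# Minimal sketch of the HAWE workflow described above:
# (1) tile-based thresholding gives a heuristic initial water mask,
# (2) a gradient-based (geodesic) active contour refines that mask.
# Tile size, Otsu, and skimage's morphological GAC are illustrative choices.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def initial_water_mask(sigma0_db, tile=256):
    """Tile-based thresholding of a SAR backscatter image (in dB)."""
    mask = np.zeros_like(sigma0_db, dtype=bool)
    h, w = sigma0_db.shape
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            block = sigma0_db[r:r + tile, c:c + tile]
            try:
                t = threshold_otsu(block)
            except ValueError:        # nearly constant tile, nothing to split
                continue
            # water appears as the low-backscatter class in SAR imagery
            mask[r:r + tile, c:c + tile] = block < t
    return mask

def refine_with_active_contour(sigma0_db, init_mask, n_iter=100):
    """Refine the heuristic mask using image gradient information."""
    gimg = inverse_gaussian_gradient(sigma0_db.astype(float))
    return morphological_geodesic_active_contour(
        gimg, n_iter, init_level_set=init_mask.astype(np.int8))
```

The refinement is driven by inverse_gaussian_gradient, which is small near strong edges, so the evolving contour tends to settle on water boundaries indicated by the SAR image gradient.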
A plenoptic light field (LF) camera places an array of microlenses in front of an image sensor in order to separately capture different directional rays arriving at each image pixel. With a conventional Bayer pattern, the data captured at each pixel is a single color component (R, G, or B). The sensed data then undergoes demosaicking (interpolation of RGB components per pixel) and conversion to an array of sub-aperture images (SAIs). In this paper, we propose a new LF image coding scheme based on a graph lifting transform (GLT), where the acquired sensor data are coded in their original captured form without pre-processing. Specifically, we directly map the raw sensed color data to the SAIs, resulting in sparsely distributed color pixels on 2D grids, and perform demosaicking at the receiver after decoding. To exploit spatial correlation among the sparse pixels, we propose a novel intra-prediction scheme, where the prediction kernel is determined according to the local gradient estimated from already coded neighboring pixel blocks. We then connect the pixels by forming a graph, modeling the prediction residuals statistically as a Gaussian Markov Random Field (GMRF). The optimal edge weights are computed via a graph learning method using a set of training SAIs. The residual data is encoded via the low-complexity GLT. Experiments show that at high PSNRs, which are important for archiving and instant storage scenarios, our method significantly outperformed a conventional light field image coding scheme in which demosaicking is followed by High Efficiency Video Coding (HEVC).
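For intuition about the lifting step, the sketch below implements one predict/update level of a graph lifting transform applied to a residual signal defined on graph nodes. The even/odd bipartition, the row-normalized prediction weights, and the update factor of 0.5 are illustrative assumptions; they do not reproduce the paper's learned GMRF edge weights or gradient-based prediction kernels.

```python
# Minimal sketch of one level of a graph lifting transform (predict/update)
# for residuals on sparsely populated SAI pixels, under the assumptions above.
import numpy as np

def graph_lifting_forward(x, W, evens, odds):
    """One lifting level on a graph signal x.

    x     : (N,) signal (e.g., intra-prediction residuals on SAI pixels)
    W     : (N, N) symmetric nonnegative edge-weight matrix
    evens : indices of 'even' (predictor) nodes
    odds  : indices of 'odd' (predicted) nodes
    Returns (low, detail): smooth coefficients on evens, detail on odds.
    """
    # Predict step: each odd node is predicted from its weighted even neighbors.
    P = W[np.ix_(odds, evens)]
    P = P / np.maximum(P.sum(axis=1, keepdims=True), 1e-12)
    detail = x[odds] - P @ x[evens]

    # Update step: even nodes absorb part of the detail along the
    # transposed, renormalized connections to keep the low band smooth.
    U = W[np.ix_(evens, odds)]
    U = 0.5 * U / np.maximum(U.sum(axis=1, keepdims=True), 1e-12)
    low = x[evens] + U @ detail
    return low, detail
```

Because lifting steps are invertible (subtract the update, then add back the prediction), a decoder can reconstruct the residual signal exactly from the transmitted low-pass and detail coefficients.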