In this paper, we present a new ID-based two-party authenticated key exchange (AKE) protocol, which makes use of the twin Diffie-Hellman problem, a technique recently proposed by Cash, Kiltz and Shoup. We show that our scheme is secure under the bilinear Diffie-Hellman (BDH) assumption in the enhanced Canetti-Krawczyk (eCK) model, which better supports the adversary's queries than previous AKE models. To the best of our knowledge, our scheme is the first ID-based AKE protocol provably secure in the eCK model.
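The abstract names but does not state the underlying hardness assumptions; for reference, a minimal statement of both is given below (the notation is assumed, not taken from the paper):

```latex
% Bilinear Diffie-Hellman (BDH): given a pairing e : G x G -> G_T with
% generator P and random exponents a, b, c, compute
\[
\mathrm{BDH}(aP,\, bP,\, cP) \;=\; e(P,P)^{abc}.
\]
% Twin Diffie-Hellman (Cash-Kiltz-Shoup): given X_1 = g^{x_1}, X_2 = g^{x_2}
% and Y = g^{y}, compute both ordinary Diffie-Hellman values at once,
\[
\mathrm{2dh}(X_1, X_2, Y) \;=\; \bigl(g^{x_1 y},\; g^{x_2 y}\bigr),
\]
% which is as hard as ordinary CDH, yet admits an efficient "trapdoor test"
% that lets a security proof answer decision queries without the secrets.
```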
Semantic segmentation is a fundamental task in remote sensing image understanding. Recently, Deep Convolutional Neural Networks (DCNNs) have considerably improved the performance of semantic segmentation of natural scenes. However, it remains challenging for Very High Resolution (VHR) remote sensing images. Due to large and complex scenes as well as the influence of illumination and imaging angle, existing methods find it particularly difficult to accurately determine the category of pixels at object boundaries, the so-called boundary blur. We propose a framework called Boundary-Aware Semi-Supervised Semantic Segmentation Network (BAS4Net), which obtains more accurate segmentation results without additional annotation workload, especially at object boundaries. The Channel-weighted Multi-scale Feature (CMF) module balances semantic and spatial information, and the Boundary Attention Module (BAM) weights features with rich semantic boundary information to alleviate the boundary blur. Additionally, to reduce the amount of difficult and tedious manual labeling of remote sensing images, a discriminator network infers pseudo-labels from unlabeled images to assist semi-supervised learning and further improve the performance of the segmentation network. To validate the effectiveness of the proposed framework, extensive experiments have been performed on the ISPRS Vaihingen dataset and on AIR-SEG, a novel remote sensing dataset with more categories and more complex boundaries. The results demonstrate a significant improvement in accuracy, especially at object boundaries and for small objects.
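The abstract only names the CMF module; as a rough illustration of channel-weighted multi-scale feature fusion, here is a minimal PyTorch sketch. The module structure, channel widths, and the SE-style gate are assumptions for illustration, not the published BAS4Net design.

```python
# Sketch: project multi-scale backbone features to a common width, concatenate
# at the finest resolution, and re-weight channels with a squeeze-and-excitation
# style gate before merging.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelWeightedMultiScaleFusion(nn.Module):
    def __init__(self, in_channels, out_channels, reduction=4):
        super().__init__()
        # Project each scale to a common channel width before fusion.
        self.proj = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels]
        )
        fused = out_channels * len(in_channels)
        # Channel gate: global pooling -> bottleneck MLP -> sigmoid weights.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(fused, fused // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(fused // reduction, fused, kernel_size=1),
            nn.Sigmoid(),
        )
        self.merge = nn.Conv2d(fused, out_channels, kernel_size=3, padding=1)

    def forward(self, features):
        # `features`: list of tensors from shallow (high-res) to deep (low-res).
        target = features[0].shape[-2:]
        feats = [
            F.interpolate(p(f), size=target, mode="bilinear", align_corners=False)
            for p, f in zip(self.proj, features)
        ]
        x = torch.cat(feats, dim=1)
        x = x * self.gate(x)      # channel-wise re-weighting
        return self.merge(x)      # balanced semantic/spatial features

# Usage with three hypothetical backbone stages:
cmf = ChannelWeightedMultiScaleFusion(in_channels=[256, 512, 1024], out_channels=256)
f1, f2, f3 = torch.randn(1, 256, 64, 64), torch.randn(1, 512, 32, 32), torch.randn(1, 1024, 16, 16)
out = cmf([f1, f2, f3])          # -> (1, 256, 64, 64)
```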
In recent years, Fully Convolutional Networks (FCN) have led to a great improvement in semantic labeling for various applications, including multi-modal remote sensing data. Although different fusion strategies have been reported for multi-modal data, there is no in-depth study of the reasons for their performance limits. For example, it is unclear why an early fusion of multi-modal data in an FCN does not lead to satisfying results. In this paper, we investigate the contribution of individual layers inside an FCN and propose an effective fusion strategy for the semantic labeling of color or infrared imagery together with elevation (e.g., Digital Surface Models). The sensitivity and contribution of layers with respect to classes and multi-modal data are quantified by the recall and the descent rate of recall in a multi-resolution model. The contribution of different modalities to the pixel-wise prediction is analyzed, explaining the poor performance caused by the plain concatenation of different modalities. Finally, based on this analysis, an optimized scheme for fusing layers with image and elevation information into a single FCN model is derived. Experiments are performed on the ISPRS Vaihingen 2D Semantic Labeling dataset (infrared and RGB imagery as well as elevation) and the Potsdam dataset (RGB imagery and elevation). Comprehensive evaluations demonstrate the potential of the proposed approach.
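To make the early-versus-layer-wise fusion contrast concrete, a minimal PyTorch sketch of both variants is given below. Layer counts, channel widths, and the fusion depth are illustrative assumptions, not the configuration derived in the paper.

```python
# Sketch: early fusion stacks RGB and elevation (DSM) as extra input channels
# of one FCN; mid-level fusion keeps separate encoders and merges their
# feature maps deeper in the network.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True), nn.MaxPool2d(2),
    )

class EarlyFusionFCN(nn.Module):
    """Plain concatenation of modalities at the input (the weaker baseline)."""
    def __init__(self, num_classes):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(4, 64), conv_block(64, 128))
        self.head = nn.Conv2d(128, num_classes, 1)

    def forward(self, rgb, dsm):
        x = self.encoder(torch.cat([rgb, dsm], dim=1))   # 3 + 1 input channels
        return nn.functional.interpolate(self.head(x), scale_factor=4,
                                         mode="bilinear", align_corners=False)

class MidFusionFCN(nn.Module):
    """Separate encoders per modality, fused after the second stage."""
    def __init__(self, num_classes):
        super().__init__()
        self.rgb_enc = nn.Sequential(conv_block(3, 64), conv_block(64, 128))
        self.dsm_enc = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
        self.fuse = nn.Conv2d(128 + 64, 128, 1)          # merge modality features
        self.head = nn.Conv2d(128, num_classes, 1)

    def forward(self, rgb, dsm):
        x = self.fuse(torch.cat([self.rgb_enc(rgb), self.dsm_enc(dsm)], dim=1))
        return nn.functional.interpolate(self.head(x), scale_factor=4,
                                         mode="bilinear", align_corners=False)

# Both variants map (RGB, DSM) patches to per-pixel class scores:
rgb, dsm = torch.randn(2, 3, 128, 128), torch.randn(2, 1, 128, 128)
print(MidFusionFCN(num_classes=6)(rgb, dsm).shape)       # torch.Size([2, 6, 128, 128])
```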