Feature matching, which refers to establishing reliable correspondence between two sets of features (particularly point features), is a critical prerequisite in feature-based registration. In this paper, we propose a flexible and general algorithm, called locally linear transforming (LLT), for both rigid and nonrigid feature matching of remote sensing images. We start by creating a set of putative correspondences based on feature similarity and then focus on removing outliers from the putative set and estimating the transformation. We formulate this as a maximum-likelihood estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. To ensure the well-posedness of the problem, we develop a local geometrical constraint that preserves local structures among neighboring feature points and is robust to a large number of outliers. The problem is solved with the expectation-maximization (EM) algorithm, and closed-form solutions for both rigid and nonrigid transformations are derived in the maximization step. In the nonrigid case, we model the transformation between images in a reproducing kernel Hilbert space (RKHS), and a sparse approximation is applied to the transformation, reducing the computational complexity of the method to linearithmic. Extensive experiments on real remote sensing images demonstrate that LLT produces accurate results and outperforms current state-of-the-art methods, particularly in the case of severe outliers (even up to 80%).
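The estimation procedure described above can be illustrated with a short sketch. The following is a minimal, simplified Python sketch of an EM-style inlier/outlier estimation combined with a locally linear (LLE-style) structure term; the function names, the exact residual, and all parameter values are assumptions for illustration and do not reproduce the authors' exact formulation.

# Hypothetical, simplified sketch: EM alternation between an affine fit and
# inlier posteriors, with an LLE-style local-structure term in the residual.
import numpy as np
from scipy.spatial import cKDTree

def lle_weights(X, k=5):
    """Reconstruction weights of each point from its k nearest neighbors."""
    n = X.shape[0]
    tree = cKDTree(X)
    W = np.zeros((n, n))
    for i in range(n):
        idx = tree.query(X[i], k=k + 1)[1][1:]      # skip the point itself
        G = X[idx] - X[i]                           # local differences
        C = G @ G.T + 1e-6 * np.eye(k)              # regularized local Gram matrix
        w = np.linalg.solve(C, np.ones(k))
        W[i, idx] = w / w.sum()                     # weights sum to one
    return W

def llt_affine(X, Y, k=5, n_iter=30, gamma=0.9, outlier_lik=1e-4):
    """Alternate affine estimation (M-step) and inlier posteriors (E-step)."""
    n, d = X.shape
    W = lle_weights(X, k)
    p = np.full(n, gamma)                           # initial inlier posteriors
    for _ in range(n_iter):
        # M-step: weighted least-squares affine fit T(x) = A x + t
        Xh = np.hstack([X, np.ones((n, 1))])
        w = np.sqrt(p)[:, None]
        sol = np.linalg.lstsq(Xh * w, Y * w, rcond=None)[0]
        A, t = sol[:d].T, sol[d]
        TX = X @ A.T + t
        # E-step: residual mixes point-to-point error and local-structure error
        r = np.sum((Y - TX) ** 2, axis=1) + np.sum((TX - W @ TX) ** 2, axis=1)
        sigma2 = np.sum(p * r) / (d * p.sum() + 1e-12)
        lik = np.exp(-r / (2 * sigma2)) / (2 * np.pi * sigma2) ** (d / 2)
        p = gamma * lik / (gamma * lik + (1 - gamma) * outlier_lik)
    return A, t, p > 0.5                            # transform and inlier mask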
In this paper, we propose a novel Convolutional Neural Network (CNN) structure for general-purpose multi-task learning (MTL), which enables automatic feature fusing at every layer from different tasks. This contrasts with the most widely used MTL CNN structures, which empirically or heuristically share features at specific layers (e.g., sharing all features except those of the last convolutional layer). The proposed layerwise feature-fusing scheme is formulated by combining existing CNN components in a novel way, with a clear mathematical interpretation as discriminative dimensionality reduction, and is referred to as Neural Discriminative Dimensionality Reduction (NDDR). Specifically, we first concatenate features with the same spatial resolution from different tasks along the channel dimension. Then, we show that the discriminative dimensionality reduction can be fulfilled by 1 × 1 convolution, batch normalization, and weight decay within one CNN. The use of existing CNN components ensures end-to-end training and the extensibility of the proposed NDDR layer to various state-of-the-art CNN architectures in a "plug-and-play" manner. A detailed ablation analysis shows that the proposed NDDR layer is easy to train and robust to different hyperparameters. Experiments on different task sets with various base network architectures demonstrate the promising performance and desirable generalizability of our proposed method. The code of our paper is available at https
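The layerwise fusion can be illustrated as follows. This is a minimal PyTorch sketch of an NDDR-style fusion layer (1 × 1 convolution and batch normalization applied to channel-concatenated task features); the class name, layer sizes, and the weight-decay setting are assumptions for illustration, not the authors' released implementation.

# Minimal sketch of an NDDR-style fusion layer; sizes and hyperparameters are
# illustrative assumptions, assuming all tasks share the same channel count C.
import torch
import torch.nn as nn

class NDDRLayer(nn.Module):
    """Fuse same-resolution features from T tasks via per-task 1x1 conv + batch norm."""
    def __init__(self, channels, num_tasks):
        super().__init__()
        self.fuse = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels * num_tasks, channels, kernel_size=1, bias=False),
                nn.BatchNorm2d(channels),
            )
            for _ in range(num_tasks)
        ])

    def forward(self, feats):
        # feats: list of T tensors, each (N, C, H, W) with identical spatial size
        concat = torch.cat(feats, dim=1)        # concatenate along the channel dim
        return [branch(concat) for branch in self.fuse]

# Weight decay on the 1x1 convolutions supplies the regularization of the
# dimensionality reduction; a hypothetical optimizer setup:
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, weight_decay=5e-4)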
As a fascinating topological phase of matter, Weyl semimetals host chiral fermions with distinct chiralities and spin textures. Optical excitations involving these chiral fermions can induce exotic carrier responses and, in turn, lead to novel optical phenomena. Here, we discover strong coherent chiral terahertz emission from the Weyl semimetal TaAs and demonstrate unprecedented manipulation of its polarization on a femtosecond timescale. Such polarization control is achieved via the colossal ultrafast photocurrents in TaAs arising from the circular or linear photogalvanic effect. We reveal that the chiral ultrafast photocurrents originate from the large band-velocity changes when the Weyl fermions are excited from the Weyl bands to the high-lying bands. The photocurrent generation is maximized in the near-infrared frequency range, close to 1.5 eV. Our findings provide an entirely new design concept for creating chiral photon sources using quantum materials and open up new opportunities for developing ultrafast optoelectronics based on Weyl physics.
This paper addresses the problem of face recognition when there are only a few, or even just a single, labeled examples of the face that we wish to recognize. Moreover, these examples are typically corrupted by nuisance variables, both linear (i.e., additive nuisance variables, such as bad lighting and the wearing of glasses) and non-linear (i.e., non-additive pixel-wise nuisance variables, such as expression changes). The small number of labeled examples makes it hard to remove these nuisance variables between the training and testing faces and thus to obtain good recognition performance. To address the problem, we propose a method called semi-supervised sparse representation-based classification. It builds on recent work on sparsity, where faces are represented in terms of two dictionaries: a gallery dictionary consisting of one or more examples of each person, and a variation dictionary representing linear nuisance variables (e.g., different lighting conditions and different glasses). The main idea is twofold: 1) we use the variation dictionary to characterize the linear nuisance variables via the sparsity framework, and 2) prototype face images are estimated as a gallery dictionary via a Gaussian mixture model, with mixed labeled and unlabeled samples in a semi-supervised manner, to deal with the non-linear nuisance variations between labeled and unlabeled samples. We have conducted experiments with insufficient labeled samples, even when there is only a single labeled sample per person. Our results on the AR, Multi-PIE, CAS-PEAL, and LFW databases demonstrate that the proposed method delivers significantly improved performance over existing methods.
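The representation step over the two dictionaries can be sketched as follows. This is an illustrative Python sketch of sparse coding over a combined gallery/variation dictionary followed by class-wise residual classification; the GMM-based estimation of the gallery prototypes is omitted, and the function and variable names are assumptions rather than the authors' code.

# Illustrative sketch: represent a test face over [G V] with an l1 penalty and
# classify by the smallest class-wise reconstruction residual. Names are hypothetical.
import numpy as np
from sklearn.linear_model import Lasso

def classify(y, G, V, gallery_labels, alpha=0.01):
    """y: test face (n_pixels,); G: gallery dictionary; V: variation dictionary."""
    D = np.hstack([G, V])                            # combined dictionary
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, y)                                  # sparse code of y over [G V]
    x = coder.coef_[:G.shape[1]]                     # coefficients on gallery atoms
    b = coder.coef_[G.shape[1]:]                     # coefficients on variation atoms
    best, best_res = None, np.inf
    for c in np.unique(gallery_labels):
        mask = gallery_labels == c
        # residual after explaining y by class-c gallery atoms plus variations
        res = np.linalg.norm(y - G[:, mask] @ x[mask] - V @ b)
        if res < best_res:
            best, best_res = c, res
    return best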