We propose a deep bilinear model for blind image quality assessment (BIQA) that handles both synthetic and authentic distortions. Our model consists of two convolutional neural networks (CNNs), each of which specializes in one distortion scenario. For synthetic distortions, we pre-train a CNN to classify image distortion type and level, where we enjoy large-scale training data. For authentic distortions, we adopt a pre-trained CNN for image classification. The features from the two CNNs are pooled bilinearly into a unified representation for final quality prediction. We then fine-tune the entire model on target subject-rated databases using a variant of stochastic gradient descent. Extensive experiments demonstrate that the proposed model achieves superior performance on both synthetic and authentic databases. Furthermore, we verify the generalizability of our method on the Waterloo Exploration Database using the group maximum differentiation competition.
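The bilinear pooling step described above can be sketched as an outer product of the two branches' per-location features, summed over spatial locations. This is a minimal illustration, not the paper's implementation: the feature shapes, the signed square-root step, and the L2 normalization are common conventions for bilinear features and are assumptions here.

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Pool two CNN feature maps bilinearly into one vector.

    feat_a, feat_b: arrays of shape (H*W, C1) and (H*W, C2),
    per-location features from the two branch CNNs (hypothetical
    shapes; the paper's exact layer choices are not given here).
    """
    # Outer product at each spatial location, summed over locations.
    pooled = feat_a.T @ feat_b              # shape (C1, C2)
    pooled = pooled.flatten()               # unified vector of length C1*C2
    # Signed square-root and L2 normalization, a common convention
    # for stabilizing bilinear features (an assumption here).
    pooled = np.sign(pooled) * np.sqrt(np.abs(pooled))
    pooled = pooled / (np.linalg.norm(pooled) + 1e-12)
    return pooled

# Example: 49 spatial locations, 128- and 256-dim branch features.
a = np.random.randn(49, 128)
b = np.random.randn(49, 256)
v = bilinear_pool(a, b)
print(v.shape)  # (32768,)
```

The resulting fixed-length vector can then feed a regression head that predicts a single quality score.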
Estimating the characteristics of the soil surface is important in applications such as hydrology, climatology, and agriculture. Signals transmitted by Global Navigation Satellite Systems (GNSSs) can be used for soil monitoring after reflection from the Earth's surface. In this paper, we investigate the feasibility of obtaining surface characteristics from the power ratio of the left-hand (LH) reflected signal-to-noise ratio (SNR) to the direct right-hand (RH) SNR. The analysis was carried out without accounting for surface roughness or the incoherent components of the reflected power. First, the analysis was performed on data collected during several in situ measurements in controlled environments with known characteristics. Then, further data were collected and analyzed by a GNSS receiver prototype installed on a small aircraft. This system was calibrated on the basis of signals reflected from water. The reflectivity and the estimated permittivity showed good correlation with the types of underlying terrain.
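Because SNR values are reported in decibels, the LH-reflected to RH-direct power ratio reduces to a difference of dB values, optionally shifted by a calibration offset such as one derived over water. The function names and the offset handling below are assumptions for illustration, not the paper's exact processing chain.

```python
def reflectivity_db(snr_lh_refl_db, snr_rh_direct_db, cal_offset_db=0.0):
    """Estimate surface reflectivity in dB as the ratio of the
    LH-reflected SNR to the RH-direct SNR (a dB difference),
    plus an optional calibration offset (e.g., from a water
    reference; hypothetical parameterization)."""
    return snr_lh_refl_db - snr_rh_direct_db + cal_offset_db

def linear_ratio(gamma_db):
    """Convert a dB reflectivity into a linear power ratio."""
    return 10.0 ** (gamma_db / 10.0)

# Example: reflected signal 12 dB below the direct one.
g = reflectivity_db(33.0, 45.0)
print(g)                 # -12.0 dB
print(linear_ratio(g))   # ~0.063 in linear power units
```

Comparing such reflectivity estimates against known permittivity models is what allows the terrain type to be inferred.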
Urban areas have recently become a focus of remote sensing applications, since their function is closely related to the distribution of built-up areas, whose reflectivity or scattering characteristics are the same or similar. Traditional pixel-based methods cannot discriminate the types of urban built-up areas very well. This paper investigates deep learning-based classification methods for remote sensing images, particularly high spatial resolution remote sensing (HSRRS) images with various changes and multiscene classes. Specifically, to help develop classification methods for urban built-up areas, we consider four deep neural networks (DNNs): 1) convolutional neural network (CNN); 2) capsule network (CapsNet); 3) same model with a different training round based on CNN (SMDTR-CNN); and 4) same model with a different training round based on CapsNet (SMDTR-CapsNet). The performance of the proposed methods is evaluated in terms of overall accuracy, kappa coefficient, precision, and confusion matrix. The results reveal that SMDTR-CNN obtains the best overall accuracy (95.0%) and kappa coefficient (0.944) while also improving the precision of the parking lot and resident classes by 1% and 4%, respectively.
INDEX TERMS Deep learning, convolutional neural network, urban built-up area, capsule network, model ensemble, high resolution remote sensing classification.
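The evaluation metrics named above, overall accuracy and the kappa coefficient, both derive from the confusion matrix. A minimal sketch of that computation, assuming rows are true classes and columns are predicted classes (the function name is hypothetical):

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Compute overall accuracy and Cohen's kappa from a confusion
    matrix cm, with rows as true classes and columns as predictions."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                    # observed agreement (overall accuracy)
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / (n * n)  # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa

# Example: a balanced two-class confusion matrix.
po, kappa = overall_accuracy_and_kappa([[45, 5], [5, 45]])
print(po, kappa)  # 0.9 0.8
```

Kappa discounts agreement expected by chance, which is why a 90%-accurate but balanced classifier here scores 0.8 rather than 0.9.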