Image matching is a fundamental task in three-dimensional (3D) reconstruction and has attracted extensive attention in academia and industry in recent years. However, existing matching methods suffer from low accuracy and slow speed on large-scale image datasets. To improve on modern image matching methods, this paper proposes an image matching method for 3D reconstruction, HashMatch, which achieves high matching accuracy through a hash index in very little time. Hash matching has two core components: hash table construction and hash indexing. The former encodes local feature descriptors into hash codes; the latter retrieves candidate matches for query feature points. In addition, the proposed method is highly robust to image scaling and transformation through multiple verification steps. Comprehensive experiments on several challenging datasets show that HashMatch outperforms state-of-the-art methods in both computational efficiency and matching accuracy.
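The two-stage scheme described in the abstract (build a hash table, then use the hash index to retrieve candidates) can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the abstract does not specify the hash function, so random-hyperplane locality-sensitive hashing stands in for it, and the random descriptors are stand-ins for real local features.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_hash_table(descriptors, planes):
    """Encode each descriptor into a binary hash code and bucket it by code."""
    codes = (descriptors @ planes.T > 0).astype(np.uint8)  # sign of projections
    table = {}
    for idx, code in enumerate(codes):
        table.setdefault(code.tobytes(), []).append(idx)
    return table

def query_candidates(table, planes, q):
    """Hash the query descriptor and look up candidate match indices."""
    code = (q @ planes.T > 0).astype(np.uint8)
    return table.get(code.tobytes(), [])

dim, n_bits = 128, 16
planes = rng.standard_normal((n_bits, dim))  # random projection hyperplanes
db = rng.standard_normal((1000, dim))        # stand-in SIFT-like descriptors
table = build_hash_table(db, planes)

# Candidates share a bucket with the query; a descriptor retrieves itself.
cands = query_candidates(table, planes, db[0])
```

In a full matcher, the retrieved candidates would then be ranked by descriptor distance and filtered by the geometric verification steps the abstract mentions.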
Traditional recognition methods rely on simple, hand-crafted features, a process with high complexity and unstable accuracy, while deep-learning-based expression recognition still suffers from weak network representation ability and low recognition rates. To fully represent the complex texture and edge features of expression images, a deep-learning expression recognition method combining Gabor representations with a Pulse Coupled Neural Network (PCNN) is proposed. First, different Gabor representations are obtained with a bank of Gabor filters at multiple scales and orientations, and a convolutional neural network is trained on each representation, yielding a set of G-CNNs. Then, a PCNN is introduced to fuse the different outputs of the G-CNNs. Experiments on the CK+ and JAFFE databases show average recognition rates of 94.87% and 96.91%, respectively; compared with other methods, the proposed approach achieves better recognition performance.
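The first step above, a Gabor filter bank spanning several scales and orientations, can be sketched in NumPy. This is a generic illustration under assumed parameter values (2 scales, 4 orientations, 21×21 kernels); the paper's actual bank configuration is not given in the abstract, and the stand-in image replaces a real face crop.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(ksize, sigma, theta, lam, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor filter: Gaussian envelope times a cosine carrier."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate to orientation theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)

# Bank over 2 scales x 4 orientations -> 8 representations (one G-CNN each).
bank = [gabor_kernel(21, sigma=s, theta=t, lam=2 * s)
        for s in (2.0, 4.0)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]

def filter_image(img, kernel):
    """Valid-mode 2-D correlation of a grayscale image with one kernel."""
    windows = sliding_window_view(img, kernel.shape)
    return np.tensordot(windows, kernel, axes=([2, 3], [0, 1]))

face = np.random.rand(64, 64)  # stand-in grayscale face image
responses = [filter_image(face, k) for k in bank]
```

Each response map would then be fed to its own CNN, with the PCNN fusing the per-network outputs at the decision stage.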
Shadow removal from a single image is an ill-posed problem because shadow formation is governed by complex interactions of geometry, albedo, and illumination. Most recent deep-learning-based methods try to directly estimate the mapping between shadow and non-shadow image pairs to predict the shadow-free image. However, they are not very effective for images with complex shadows or cluttered backgrounds. In this paper, we propose a novel end-to-end depth-aware shadow removal method that requires no depth images: it estimates depth information from RGB images and uses the depth features as guidance for shadow removal and refinement. The proposed framework consists of three components: depth prediction, shadow removal, and boundary refinement. First, the depth prediction module predicts the depth map corresponding to the input shadow image. Then, we propose a generative adversarial network (GAN) integrated with depth information to remove shadows from the RGB image. Finally, we propose an effective boundary refinement module that uses depth cues to alleviate artifacts around shadow boundaries after removal. We conduct experiments on several public datasets and real-world shadow images. The results demonstrate the effectiveness of the proposed method and its superior performance against state-of-the-art methods.
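The data flow of the three-component framework can be sketched as a skeleton. The real modules are learned networks; the NumPy stand-ins below are hypothetical placeholders that only show how the estimated depth map is threaded through both the removal and refinement stages as a guidance channel.

```python
import numpy as np

def predict_depth(rgb):
    """Stand-in for the depth prediction module: one depth channel per pixel."""
    return rgb.mean(axis=-1, keepdims=True)  # H x W x 1

def remove_shadow(rgb, depth):
    """Stand-in for the depth-guided GAN generator: depth is concatenated
    with the RGB input as a fourth guidance channel."""
    x = np.concatenate([rgb, depth], axis=-1)  # H x W x 4 network input
    return x[..., :3]                          # placeholder RGB output

def refine_boundary(img, depth):
    """Stand-in for boundary refinement driven by depth cues."""
    return img

shadow_img = np.random.rand(32, 32, 3)         # stand-in shadow image
depth_map = predict_depth(shadow_img)
output = refine_boundary(remove_shadow(shadow_img, depth_map), depth_map)
```

The key design choice the abstract highlights is that depth is inferred from RGB rather than captured, so the pipeline needs no depth sensor at inference time.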