This paper studies a traditional target classification and recognition algorithm based on Histogram of Oriented Gradients (HOG) feature extraction and Support Vector Machine (SVM) classification and applies it to distributed artificial intelligence image recognition. Because the number of images is huge, the baseline detection speed cannot meet the requirements, so we improve the HOG feature extraction algorithm. Principal component analysis (PCA) is used to reduce the dimensionality of the HOG features, and distributed artificial intelligence image recognition experiments show that detection efficiency improves slightly and detection speed also improves; we attribute these gains to PCA retaining mainly the useful feature information in the HOG descriptors. We also study parallelizing HOG feature computation on a graphics processing unit (GPU). GPUs are designed for highly parallel, high-density computation, and HOG feature computation is expensive, so parallelizing it on a GPU accelerates feature extraction. Image experiments with the parallelized HOG feature algorithm show that the speed of distributed artificial intelligence image recognition is greatly improved. Finally, by analyzing existing digital image recognition methods, an improved BP neural network algorithm is proposed; while maintaining accuracy, it accelerates digital image recognition, reduces the time required for recognition, guarantees real-time performance, and its effectiveness is verified experimentally.
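To make the pipeline concrete, the following is a minimal sketch of the HOG-to-PCA-to-SVM flow described above, assuming scikit-image and scikit-learn as the toolkits; the descriptor parameters, component count, and classifier settings are illustrative and not the paper's exact configuration.

# Minimal sketch of the HOG -> PCA -> SVM pipeline (illustrative only).
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def hog_features(images):
    # One HOG descriptor per grayscale image (all images assumed the same size).
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2), block_norm="L2-Hys")
        for img in images
    ])

def train_and_score(X_train, y_train, X_test, y_test, n_components=100):
    # PCA keeps the leading components of the HOG descriptor, shrinking the
    # vector the SVM must handle and thus the per-image classification cost.
    clf = make_pipeline(PCA(n_components=n_components), LinearSVC())
    clf.fit(hog_features(X_train), y_train)
    return clf.score(hog_features(X_test), y_test)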
Authentication technology based on One-Time Passwords (OTP) is increasingly used in important network applications because of its higher security. However, many current OTP-based schemes still derive passwords from mathematical methods or simple random sources; such passwords often lack good randomness and cannot ensure system security. In this paper, a new OTP-based authentication scheme is presented. The scheme generates random numbers quickly by physical methods and applies them throughout the authentication process, guaranteeing that passwords remain dynamic and secure. It can therefore defend against many human-originated attacks and is well suited to fields that require strong security guarantees, such as finance and stock exchange systems.
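As an illustration only, the sketch below shows a generic challenge-response OTP exchange in which every challenge comes from a strong entropy source (Python's secrets module stands in for the physical random-number generator the scheme relies on); it is not the paper's actual protocol.

# Generic challenge-response OTP exchange (illustrative stand-in, not the paper's scheme).
import hmac
import hashlib
import secrets

def server_issue_challenge(num_bytes: int = 16) -> bytes:
    # Fresh, unpredictable challenge for every login attempt.
    return secrets.token_bytes(num_bytes)

def client_response(shared_secret: bytes, challenge: bytes) -> str:
    # One-time password: keyed hash of the per-session challenge.
    return hmac.new(shared_secret, challenge, hashlib.sha256).hexdigest()

def server_verify(shared_secret: bytes, challenge: bytes, response: str) -> bool:
    # Constant-time comparison avoids leaking information through timing.
    expected = client_response(shared_secret, challenge)
    return hmac.compare_digest(expected, response)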
In online public opinion events, online users are more willing to express their opinions and emotions on social platforms by combining text and images rather than text alone. Given this change in emotional expression, it is vital to recognize the sentiments of online users in such events in an appropriate way. In this paper, we propose a novel Deep Neural Network (DNN) model that recognizes online users' sentiments in online public opinion events by analyzing the sentiments of texts and the attached images together. We also compare two fusion strategies, feature-level fusion and decision-level fusion, for combining the affective information from text and images. In feature-level fusion, fine-tuned Convolutional Neural Networks (CNNs) and Bidirectional Long Short-Term Memory networks (BiLSTMs) extract visual and textual features, respectively; the features are then concatenated and fed to a DNN classifier. In decision-level fusion, a rule fuses the unimodal outputs to generate the final predicted labels. Experimental results show that the proposed multimodal DNN model outperforms unimodal sentiment recognition models, and that feature-level fusion performed better in our experiments.
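The feature-level fusion path can be sketched as follows in PyTorch, assuming a ResNet-18 visual branch and a single-layer BiLSTM textual branch; the backbone choice and layer sizes are assumptions, since the paper's exact fine-tuned architecture may differ.

# Feature-level fusion sketch: concatenate CNN image features and BiLSTM text features.
import torch
import torch.nn as nn
from torchvision import models

class FeatureFusionSentiment(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, lstm_hidden=128, num_classes=3):
        super().__init__()
        # Visual branch: pretrained CNN with its classification head removed.
        cnn = models.resnet18(weights="IMAGENET1K_V1")
        cnn.fc = nn.Identity()                     # yields a 512-dim image feature
        self.cnn = cnn
        # Textual branch: embedding followed by a bidirectional LSTM.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, lstm_hidden, batch_first=True,
                              bidirectional=True)
        # Fusion classifier over the concatenated visual and textual features.
        self.classifier = nn.Sequential(
            nn.Linear(512 + 2 * lstm_hidden, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, images, token_ids):
        img_feat = self.cnn(images)                        # (B, 512)
        _, (h_n, _) = self.bilstm(self.embed(token_ids))
        txt_feat = torch.cat([h_n[-2], h_n[-1]], dim=1)    # forward + backward states
        return self.classifier(torch.cat([img_feat, txt_feat], dim=1))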
Intangible Cultural Heritage (ICH), comprising a variety of immaterial manifestations, bears witness to human creativity and wisdom across long histories. The rapid development of digital technologies has accelerated the recording of ICH, generating a sheer volume of heterogeneous data that remains fragmented. To address this, existing studies mainly adopt knowledge graphs (KGs), which provide rich knowledge representation. However, most KGs are text-based and text-derived; they cannot supply related images or support downstream multimodal tasks, which makes it harder for the public, especially people without prior ICH knowledge, to form a visual perception and comprehend ICH completely. Hence, taking the Chinese national-level ICH list as an example, we propose to construct a large-scale, comprehensive Multimodal Knowledge Graph (CICHMKG) that combines text and image entities from multiple data sources, and we give a practical construction framework. Additionally, to select representative images for ICH entities, we propose a method composed of a denoising algorithm (CNIFA) and a series of criteria that exploit global and local visual features of images and textual features of captions; extensive empirical experiments demonstrate its effectiveness. Lastly, we construct the CICHMKG, consisting of 1,774,005 triples, and visualize it to facilitate interaction and help the public explore ICH in depth.
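A hedged sketch of the image-selection idea follows: each candidate image is scored by how close its visual feature lies to the entity's visual centroid and how well its caption feature matches the entity description. The feature extractors, the weighting, and the function names are illustrative assumptions; the paper's CNIFA denoising algorithm and selection criteria are more involved.

# Illustrative scoring of candidate images for one ICH entity (not the paper's CNIFA).
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def rank_candidate_images(visual_feats, caption_feats, entity_text_feat, alpha=0.5):
    # visual_feats: (N, Dv) image features; caption_feats: (N, Dt) caption features;
    # entity_text_feat: (Dt,) feature of the entity's textual description.
    centroid = visual_feats.mean(axis=0)
    scores = [
        alpha * cosine(v, centroid) + (1 - alpha) * cosine(c, entity_text_feat)
        for v, c in zip(visual_feats, caption_feats)
    ]
    return np.argsort(scores)[::-1]   # indices, most representative first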