A security ontology can provide a shared knowledge model for an application domain, overcoming the data heterogeneity issue, but it suffers from its own heterogeneity problem. Finding identical entities in two ontologies, i.e., ontology alignment, is a solution. It is important to select an effective similarity measure (SM) to distinguish heterogeneous entities; however, because of the complex semantic relationships among concepts, no single SM is guaranteed to be effective in every alignment task. How SMs are aggregated, so that their strengths and weaknesses complement one another, directly affects the quality of the resulting alignments. In this work, we formally define this problem, discuss its challenges, and present a problem-specific genetic algorithm (GA) to address it effectively. We experimentally test our approach on the bibliographic tracks provided by OAEI and on five pairs of security ontologies. The results show that the GA can effectively address different heterogeneous ontology-alignment tasks and determine high-quality security ontology alignments.
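As a rough illustration of the aggregation idea (not the paper's actual algorithm), a GA can search for SM weights that maximize the f-measure of the alignment induced by the weighted combination. Everything below — the toy similarity scores, the truncation selection, one-point crossover, and Gaussian mutation — is a hypothetical minimal sketch:

```python
import random

def aggregate(sims, weights):
    """Weighted average of similarity-measure scores for one entity pair."""
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * s for w, s in zip(weights, sims)) / total

def fitness(weights, pairs, labels, threshold=0.5):
    """F-measure of the alignment induced by the aggregated similarity."""
    predicted = [aggregate(s, weights) >= threshold for s in pairs]
    tp = sum(p and l for p, l in zip(predicted, labels))
    fp = sum(p and not l for p, l in zip(predicted, labels))
    fn = sum((not p) and l for p, l in zip(predicted, labels))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def genetic_search(pairs, labels, n_measures, pop=20, gens=50, seed=0):
    """Evolve a weight vector over >= 2 SMs that maximizes alignment f-measure."""
    rng = random.Random(seed)
    population = [[rng.random() for _ in range(n_measures)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda w: fitness(w, pairs, labels), reverse=True)
        parents = population[: pop // 2]           # truncation selection (elitist)
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_measures)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_measures)          # point mutation, clamped to [0, 1]
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        population = parents + children
    return max(population, key=lambda w: fitness(w, pairs, labels))

# toy task: two SMs, the first correlates with the true correspondences
pairs = [(0.9, 0.1), (0.8, 0.9), (0.2, 0.8), (0.1, 0.2)]
labels = [True, True, False, False]
best = genetic_search(pairs, labels, n_measures=2)
```

On this toy data, any weight vector with more mass on the first SM yields a perfect alignment, so the search converges quickly; the real problem's fitness would be evaluated against a reference alignment.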
The Bai people have left behind a wealth of ancient texts that record their splendid civilization, yet fewer and fewer people can read these texts today. It is therefore of great practical value to design a model that can automatically recognize ancient (offset) Bai texts. However, annotating ancient (offset) texts requires expert knowledge, and the available data are limited in scale, so we propose using handwritten Bai texts, which can be easily obtained and annotated, to help recognize ancient (offset) Bai texts. Essentially, this is a domain-adaptation problem, and several domain-adaptation methods have been transplanted to handle ancient (offset) Bai text recognition. Unfortunately, none of them achieves high performance, because they do not address how to separate the style and content information of an image. To address this, we propose an information separation network (ISN) that can effectively separate content and style information and classify with content features only. Specifically, our network first divides the visual features into a style feature and a content feature by a separator, and ensures through cross-domain cross-reconstruction that the style feature contains only style and the content feature contains only content, thus achieving the separation of style and content. Finally, only the content feature is used for classification, which greatly reduces the impact of the cross-domain shift. The proposed method achieves leading results on five public datasets and one private dataset.
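The separation-plus-cross-reconstruction idea can be sketched numerically. The linear separator heads, linear decoder, and the particular swap-then-swap-back cycle below are all hypothetical stand-ins for the paper's learned networks; the sketch only shows how the loss terms would be wired:

```python
import numpy as np

rng = np.random.default_rng(0)
d_vis, d_lat = 8, 4

# hypothetical linear separator heads and decoder (learned networks in the paper)
w_content = rng.normal(size=(d_vis, d_lat))
w_style   = rng.normal(size=(d_vis, d_lat))
w_decode  = rng.normal(size=(2 * d_lat, d_vis))

def separate(x):
    """Split a visual feature into a (content, style) pair."""
    return x @ w_content, x @ w_style

def decode(content, style):
    """Reconstruct a visual feature from a (content, style) pair."""
    return np.concatenate([content, style]) @ w_decode

x_hand = rng.normal(size=d_vis)  # handwritten-domain feature
x_old  = rng.normal(size=d_vis)  # ancient (offset) domain feature

c_hand, s_hand = separate(x_hand)
c_old,  s_old  = separate(x_old)

# self-reconstruction: each image from its own content and style
self_loss = (np.mean((decode(c_hand, s_hand) - x_hand) ** 2)
             + np.mean((decode(c_old, s_old) - x_old) ** 2))

# cross-domain cycle: render handwritten content in the ancient style,
# re-separate, then swap the style back and demand the original image
x_swap = decode(c_hand, s_old)
c_back, _ = separate(x_swap)
cycle_loss = np.mean((decode(c_back, s_hand) - x_hand) ** 2)

total_loss = self_loss + cycle_loss
# the classifier would then see only the content features c_hand / c_old
```

Minimizing the cycle term forces content features to survive a style swap, which is what makes classifying on content alone robust across domains.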
Conventional zero-shot learning aims to train a classifier on a training set (seen classes) to recognize instances of novel classes (unseen classes) via class-level semantic attributes. In generalized zero-shot learning (GZSL), the classifier must recognize both seen and unseen classes, which is a problem of extreme data imbalance. To solve this problem, feature-generative methods have been proposed to make up for the lack of unseen-class data. Current generative methods use class semantic attributes as the cues for synthesizing visual features, which can be considered a mapping from semantic attributes to visual features. However, this mapping cannot effectively transfer knowledge learned from seen classes to unseen classes, because the information in semantic attributes and the information in visual features is asymmetric: semantic attributes contain key category-description information, whereas visual features also carry visual information that cannot be represented by semantics. To this end, we propose a residual-prototype-generating network (RPGN) for GZSL that extracts residual visual features from the original visual features with an encoder–decoder and synthesizes the prototype visual features associated with semantic attributes with a disentangle regressor. Experimental results show that the proposed method achieves competitive results on four GZSL benchmark datasets with significant gains.
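A stripped-down sketch of the prototype/residual decomposition (not the paper's architecture): a regressor maps attributes to a semantic prototype, the residual is whatever of the visual feature the prototype misses, and an unseen-class feature is synthesized as its prototype plus a seen-class residual. The linear regressor and the subtraction standing in for the encoder–decoder are both simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_attr, d_vis = 5, 10

# hypothetical linear regressor: semantic attributes -> prototype visual feature
w_proto = rng.normal(size=(d_attr, d_vis))

def prototype_regressor(attr):
    """Synthesize the attribute-associated prototype visual feature."""
    return attr @ w_proto

def residual_features(visual, prototype):
    """Residual = visual information not captured by the semantic prototype
    (an encoder-decoder in the paper; plain subtraction here)."""
    return visual - prototype

# a seen-class example decomposes into prototype + residual
attr_seen = rng.normal(size=d_attr)
visual_seen = rng.normal(size=d_vis)
proto = prototype_regressor(attr_seen)
residual = residual_features(visual_seen, proto)

# synthesize a feature for an unseen class: its own prototype plus a residual
# borrowed from (or sampled from the statistics of) seen-class residuals
attr_unseen = rng.normal(size=d_attr)
synthetic = prototype_regressor(attr_unseen) + residual
```

The decomposition makes the asymmetry explicit: only the prototype part is tied to semantics, while the residual carries the non-semantic visual variation that is shared across classes.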
The Bai nationality is known for its long history and the language it has created. However, because fewer people of the younger generation learn the traditional language, the glorious Bai culture is becoming less well known, and Bai characters are difficult to understand. This paper aims to build a highly precise character-recognition model for Bai characters, helping people read books written in Bai characters and thereby popularizing the culture. To begin with, a dataset is built with the support of Bai-culture enthusiasts and experts. However, because expertise in this area is limited, the dataset is not large enough, and a deep learning model trained on it is less accurate for lack of sufficient data. We adopt zero-shot learning (ZSL) to overcome this insufficiency: Chinese characters serve as the seen classes, Bai characters as the unseen classes, and the number of strokes as the attribute used to construct the ZSL-format dataset. However, existing ZSL methods ignore character-structure information, so we put forward a generation method based on a variational autoencoder (VAE) that automatically captures character-structure information. Experimental results show that the method facilitates the recognition of Bai characters and makes it more precise.
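The attribute-conditioned VAE ingredients can be sketched as follows. The tiny tanh encoder/decoder, the toy dimensions, and the normalized stroke-count attribute are all illustrative assumptions; only the reparameterization trick and the diagonal-Gaussian KL term are standard:

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_attr, d_z = 6, 2, 4

def encode(x, attr):
    """Hypothetical encoder: posterior mean and log-variance from image + attribute."""
    h = np.tanh(np.concatenate([x, attr]))  # length d_x + d_attr = 2 * d_z
    return h[:d_z], h[d_z:]                 # mu, logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps so the sampling step stays differentiable."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_divergence(mu, logvar):
    """KL(q(z|x,a) || N(0, I)) for a diagonal Gaussian posterior."""
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

w_dec = rng.normal(size=(d_z + d_attr, d_x))

def decode(z, attr):
    """Hypothetical decoder: synthesize a character feature from latent + attribute."""
    return np.tanh(np.concatenate([z, attr])) @ w_dec

# training step on a seen (Chinese) character with its stroke attribute
x = rng.normal(size=d_x)
attr = np.array([0.3, 0.7])        # e.g. normalized stroke-count attribute
mu, logvar = encode(x, attr)
z = reparameterize(mu, logvar)
loss = np.mean((decode(z, attr) - x) ** 2) + kl_divergence(mu, logvar)

# test time: sample z from the prior and condition on an unseen (Bai)
# character's stroke attribute to synthesize its features
z_new = rng.normal(size=d_z)
x_bai = decode(z_new, np.array([0.9, 0.1]))
```

Conditioning the decoder on the stroke attribute is what lets the generator produce features for Bai characters never seen in training; the paper's contribution of capturing character structure would live inside the encoder/decoder design.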