A great deal of operational information exists in the form of text, so extracting it from unstructured military text is of great significance for supporting command decision making and operations. Military relation extraction, one of the main tasks of military information extraction, aims to identify the relation between two named entities in unstructured military text. Traditional relation extraction methods, however, struggle with inadequate hand-crafted features and inaccurate Chinese word segmentation in the military domain, and fail to make full use of the symmetry of entity relations in military texts. We present a Chinese military relation extraction method that builds on a pre-trained language model and combines a bidirectional gated recurrent unit (BiGRU) with a multi-head attention mechanism (MHATT). Specifically, the embedding layer combines the word embeddings produced by the pre-trained language model with position embeddings; the forward and backward output vectors of the BiGRU are symmetrically concatenated to learn contextual semantic features; and a multi-head attention mechanism is fused into the network to strengthen its capacity to express semantic information. Extensive experiments on a military text corpus that we built demonstrate the superiority of our method over traditional non-attention, attention, and improved-attention models, improving the overall F1-score by about 4%.
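The abstract above gives no implementation details, but the architecture it describes (pre-trained-LM embeddings into a BiGRU whose bidirectional outputs are refined by multi-head attention before relation classification) follows a common pattern. The PyTorch sketch below illustrates that pattern under assumed settings; the dimensions, the mean pooling, and the class name are illustrative choices, not the authors' code.

```python
import torch
import torch.nn as nn

class BiGRUMultiHeadAttention(nn.Module):
    """Sketch of a relation classifier: BiGRU over pre-trained LM
    embeddings, fused with multi-head self-attention."""
    def __init__(self, embed_dim=768, hidden_dim=256, num_heads=8,
                 num_relations=10):
        super().__init__()
        # Bidirectional GRU; its output concatenates both directions
        self.bigru = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Multi-head self-attention over the BiGRU output sequence
        self.mhatt = nn.MultiheadAttention(2 * hidden_dim, num_heads,
                                           batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_relations)

    def forward(self, embeddings):
        # embeddings: (batch, seq_len, embed_dim), assumed to be LM word
        # embeddings already summed with position embeddings upstream
        h, _ = self.bigru(embeddings)       # (batch, seq_len, 2*hidden_dim)
        attn_out, _ = self.mhatt(h, h, h)   # attention-fused context features
        pooled = attn_out.mean(dim=1)       # mean pooling (an assumption)
        return self.classifier(pooled)      # relation logits

# Usage: a batch of 4 sentences, 32 tokens each, 768-dim embeddings
model = BiGRUMultiHeadAttention()
logits = model(torch.randn(4, 32, 768))
print(logits.shape)  # torch.Size([4, 10])
```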
Entity linking, a crucial task in natural language processing, aims to link entity mentions in a text to their corresponding entities in a knowledge base. While long documents provide abundant contextual information that facilitates feature extraction for entity identification and disambiguation, entity linking in Chinese short texts presents significant challenges. This study introduces an approach to entity linking in Chinese short texts that combines multiple embedding representations: it integrates the embeddings of entities and relations from knowledge graph triples with the embeddings of their descriptive texts to enhance linking performance, and it incorporates external semantic supplements to strengthen the model's feature learning capabilities. The Multi-Embedding Representation BERT-BiGRU (MER-BERT-BiGRU) neural network model, which couples Bidirectional Encoder Representations from Transformers (BERT) with a bidirectional gated recurrent unit (BiGRU), is employed for embedding learning. Precision, recall, and F1 scores reached 89.73%, 92.18%, and 90.94%, respectively, demonstrating the effectiveness of the approach.
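The abstract above does not specify how the two embedding sources are fused, so the sketch below shows one plausible reading: knowledge-graph triple embeddings and BERT-encoded description texts are projected into a shared space, concatenated along the sequence axis, and scored by a BiGRU head. The dimensions (200-d TransE-style KG embeddings, 768-d BERT outputs), the projection layers, and the scoring head are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class MultiEmbeddingLinker(nn.Module):
    """Sketch of fusing KG triple embeddings with description-text
    embeddings before a BiGRU mention-entity scorer."""
    def __init__(self, kg_dim=200, text_dim=768, hidden_dim=256):
        super().__init__()
        # Project both embedding spaces into a shared dimension
        self.kg_proj = nn.Linear(kg_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.bigru = nn.GRU(hidden_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.scorer = nn.Linear(2 * hidden_dim, 1)

    def forward(self, kg_emb, text_emb):
        # kg_emb: (batch, n_triples, kg_dim), e.g. TransE-style vectors
        # text_emb: (batch, n_tokens, text_dim), e.g. BERT outputs of the
        # entity/relation description text
        fused = torch.cat([self.kg_proj(kg_emb),
                           self.text_proj(text_emb)], dim=1)
        h, _ = self.bigru(fused)                        # (batch, seq, 2*hidden)
        return self.scorer(h.mean(dim=1)).squeeze(-1)   # match score per pair

# Usage: 4 candidate pairs, 5 triples and 32 description tokens each
model = MultiEmbeddingLinker()
score = model(torch.randn(4, 5, 200), torch.randn(4, 32, 768))
print(score.shape)  # torch.Size([4])
```

In this reading, the highest-scoring candidate entity in the knowledge base would be selected as the link target for a given mention.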