2D nanoporous NiO nanosheets, which exhibited superior capacitive properties, were synthesized by a facile method using metal–organic frameworks as precursors.
The primary task of few-shot relation extraction is to quickly learn the features of relation classes from a few labelled instances and predict the semantic relations between entity pairs in new instances. Most existing few-shot relation extraction methods do not fully exploit the relational information in sentences, which makes it difficult to improve relation classification performance. Some researchers have attempted to incorporate external information, but the results have been unsatisfactory when applied to different domains. In this paper, we propose a method that uses triple information for data augmentation, which alleviates the issue of insufficient instances and has strong domain adaptation capability. Firstly, we extract relation and entity pairs from the instances in the support set to form relation triples. Next, the sentence information and the relation triple information are encoded with the same sentence encoder. Then, we construct an interactive attention module so that the query-set instances interact separately with the support-set instances and the relation triple instances; the module attends more strongly to the highly interactive parts between instances and assigns them higher weights. Finally, we merge the interacted support-set representation and relation triple representation. To our knowledge, we are the first to use triple information for data augmentation in relation extraction. In experiments on the standard datasets FewRel 1.0 and FewRel 2.0 (domain adaptation), we observed substantial improvements without including external information.
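As a rough illustration of the interactive attention and fusion steps described above, the following PyTorch sketch attends from query-set encodings over support-set and relation-triple encodings and merges the two interacted representations. The tensor shapes, the scaled dot-product scoring, and the element-wise-sum fusion are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch (PyTorch) of query-support / query-triple interactive attention.
import torch
import torch.nn.functional as F


def interactive_attention(query, keys):
    """Attend from each query instance over a set of instance encodings.

    query: [Q, d]  encoded query-set instances
    keys:  [N, d]  encoded support-set or relation-triple instances
    Returns an attention-weighted summary of `keys` for every query: [Q, d].
    """
    scores = query @ keys.t() / keys.size(-1) ** 0.5   # [Q, N] interaction scores
    weights = F.softmax(scores, dim=-1)                # higher weight to stronger interactions
    return weights @ keys                              # [Q, d]


# Toy usage: a 5-way setting with hypothetical dimensions.
d = 768
query_repr = torch.randn(4, d)     # 4 query instances
support_repr = torch.randn(5, d)   # 5 support instances (one per relation)
triple_repr = torch.randn(5, d)    # 5 relation triples, encoded by the same encoder

support_aware = interactive_attention(query_repr, support_repr)
triple_aware = interactive_attention(query_repr, triple_repr)

# Merge the interacted support-set and triple representations
# (element-wise sum is one simple fusion choice).
fused = support_aware + triple_aware   # [4, d]
```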
Deep learning techniques have demonstrated significant advances in text classification. Regrettably, most of these techniques require a substantial corpus of annotated data to achieve optimal performance. Meta-learning has yielded intriguing results in few-shot learning tasks, showcasing its potential to advance the field. However, current meta-learning methods are susceptible to overfitting because of the mismatch between the small number of samples and the complexity of the model. To mitigate this concern, we propose a Prompt-based Graph Convolutional Adversarial (PGCA) meta-learning framework that improves the adaptability of complex models in few-shot scenarios. Firstly, leveraging prompt learning, we generate embedding representations that bridge pre-training and the downstream tasks. Then, we design a meta-knowledge extractor based on a graph convolutional network (GCN) to capture inter-class dependencies through instance-level interactions. We also integrate an adversarial network architecture into the meta-learning framework to increase sample diversity through adversarial training and improve the model's ability to adapt to new tasks. Specifically, we mitigate the impact of extreme samples by introducing external knowledge to construct a list of class prototype extensions. Finally, we conduct a series of experiments on four public datasets to demonstrate the effectiveness of our proposed method.
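The following PyTorch sketch shows one way a GCN-based meta-knowledge extractor could propagate information across instance embeddings via instance-level interactions, in the spirit of the description above. The cosine-similarity adjacency, the single graph layer, and the dimensions are illustrative assumptions rather than the PGCA implementation.

```python
# Minimal sketch (PyTorch) of a GCN layer acting as a meta-knowledge extractor
# over instance embeddings within an episode.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNExtractor(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        # x: [N, d] instance embeddings (e.g. from a prompt-based encoder)
        xn = F.normalize(x, dim=-1)
        adj = F.softmax(xn @ xn.t(), dim=-1)   # row-normalized instance graph from similarities
        return F.relu(self.proj(adj @ x))      # propagate features over the graph


# Toy usage with hypothetical sizes: 5 classes x 2 shots = 10 instances.
x = torch.randn(10, 768)
out = GCNExtractor(768)(x)   # [10, 768] interaction-aware representations
```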
Few-shot relation extraction aims to identify and extract semantic relations between entity pairs using only a small number of annotated instances. Many recently proposed prototype-based methods have shown excellent performance. However, existing prototype-based methods ignore the hidden inter-instance interaction information between the support and query sets, leading to unreliable prototypes. In addition, current optimization of the prototypical network relies only on cross-entropy loss, which is concerned solely with the accuracy of the predicted probability for the correct label and ignores the differences among the non-correct labels, and therefore cannot account for the separation of relations in the semantic space. This paper proposes an interaction-information attention network to obtain a more reliable relation prototype. Firstly, an inter-instance interaction information attention module is designed to mitigate prototype unreliability through the interaction information between support-set and query-set instances, exploiting the category information hidden in the query set. Secondly, a similarity scalar, defined from the mixed features of the prototype and the relation, is added to the focal loss to increase attention to hard examples. We conducted extensive experiments on two standard datasets and demonstrated the effectiveness of our proposed model.
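To make the loss design concrete, the following PyTorch sketch scores query instances against class prototypes and trains with a focal loss that down-weights easy examples. The distance-based scoring and the fixed gamma stand in for the paper's similarity scalar; they are assumptions for illustration, not the authors' exact formulation.

```python
# Minimal sketch (PyTorch) of a prototype classifier with a focal loss.
import torch
import torch.nn.functional as F


def prototype_logits(query, prototypes):
    # Negative squared Euclidean distance as class scores: [Q, N_way]
    return -torch.cdist(query, prototypes) ** 2


def focal_loss(logits, labels, gamma=2.0):
    # Focal loss: easy examples (high p_t) are down-weighted by (1 - p_t)**gamma,
    # so hard examples contribute more to the gradient.
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, labels.unsqueeze(1)).squeeze(1)   # log-prob of true class
    pt = log_pt.exp()
    return (-((1 - pt) ** gamma) * log_pt).mean()


# Toy usage: 5-way prototypes (e.g. support-set means), 4 query instances.
prototypes = torch.randn(5, 768)
query = torch.randn(4, 768)
labels = torch.tensor([0, 2, 1, 4])
loss = focal_loss(prototype_logits(query, prototypes), labels)
```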