Recently, a new extension of fuzzy sets, Pythagorean fuzzy sets (PFSs), has attracted considerable attention from scholars across various fields of research. Owing to the power of PFSs in modeling the imprecision of human perception in multicriteria decision-making (MCDM) problems, this paper extends the classical preference ranking organization method for enrichment evaluation (PROMETHEE) to the Pythagorean fuzzy environment. The proposed method takes not only the weights of the different criteria but also the preference relations as Pythagorean fuzzy numbers, thereby providing the decision-maker with a broader range of choices for expressing preferences. Five properties are put forward to govern the design of both intuitionistic and Pythagorean fuzzy PROMETHEE (PF-PROMETHEE) preference functions. Furthermore, two illustrative examples are given to demonstrate the detailed procedure of PF-PROMETHEE, and comparisons are made to distinguish the differences among our proposed method, the classical PROMETHEE, and the intuitionistic fuzzy PROMETHEE. The results show that PF-PROMETHEE is effective, comprehensive, and applicable to a wide range of MCDM problems.
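For context, the standard constraints that distinguish a Pythagorean fuzzy number from an intuitionistic one can be sketched as follows (these are textbook definitions, not details taken from the paper itself):

```latex
% Intuitionistic fuzzy set (IFS): membership \mu and non-membership \nu
% of an element x must satisfy a linear constraint.
\[ \mu(x) + \nu(x) \le 1 \qquad \text{(IFS)} \]
% Pythagorean fuzzy set (PFS): the constraint is relaxed to the squares,
% admitting pairs such as (\mu, \nu) = (0.8, 0.5) that no IFS allows.
\[ \mu(x)^2 + \nu(x)^2 \le 1 \qquad \text{(PFS)} \]
% Remaining hesitancy degree of a PFS element:
\[ \pi(x) = \sqrt{1 - \mu(x)^2 - \nu(x)^2} \]
```

For instance, (μ, ν) = (0.8, 0.5) satisfies 0.8² + 0.5² = 0.89 ≤ 1 but violates 0.8 + 0.5 ≤ 1, which is the sense in which PFSs give the decision-maker a broader range of admissible evaluations than intuitionistic fuzzy sets.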
We use both reinforcement learning and deep learning to simultaneously extract entities and relations from unstructured text. For reinforcement learning, we model the task as a two-step decision process. Deep learning is used to automatically capture the most important information from unstructured text, which represents the state in the decision process. By designing a per-step reward function, our proposed method can pass information from entity extraction to relation extraction and obtain feedback, so that entities and relations are extracted simultaneously. First, we use a bidirectional LSTM to model context information, which realizes preliminary entity extraction. On the basis of the extraction results, an attention-based method represents the sentences containing the target entity pair to generate the initial state in the decision process. Then we use a Tree-LSTM to represent relation mentions and generate the transition state in the decision process. Finally, we employ the Q-learning algorithm to obtain the control policy π for the two-step decision process. Experiments on ACE2005 demonstrate that our method attains better performance than the state-of-the-art method, with a 2.4% increase in recall.
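The abstract does not spell out the learning loop, so the following is only a minimal, hypothetical sketch of tabular Q-learning over a toy two-step decision process (step 1: an entity decision, step 2: a relation decision). All state names, actions, and rewards below are invented for illustration; the paper's BiLSTM, attention, and Tree-LSTM state representations are not modeled here.

```python
# Minimal, hypothetical sketch: tabular Q-learning on a two-step decision
# process. States, actions, and rewards are invented for illustration.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
ACTIONS = {
    "s_entity": ["tag_entity", "skip"],         # step-1 actions
    "s_relation": ["assign_relation", "none"],  # step-2 actions
}
Q = defaultdict(float)  # Q[(state, action)]

def step(state, action):
    """Toy environment: step-wise rewards couple the two decisions."""
    if state == "s_entity":
        # A correct entity decision earns a small reward and moves to step 2.
        reward = 1.0 if action == "tag_entity" else -1.0
        return "s_relation", reward, False
    # A correct relation decision earns the final reward; episode ends.
    reward = 2.0 if action == "assign_relation" else -2.0
    return None, reward, True

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS[state])
    return max(ACTIONS[state], key=lambda a: Q[(state, a)])

for episode in range(500):
    state, done = "s_entity", False
    while not done:
        action = choose(state)
        nxt, reward, done = step(state, action)
        # Standard Q-learning update: bootstrap from the best next action.
        target = reward if done else reward + GAMMA * max(
            Q[(nxt, a)] for a in ACTIONS[nxt])
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt

print({k: round(v, 2) for k, v in Q.items()})
```

The role of the step-wise reward is visible in the update: the value learned for the step-1 (entity) action bootstraps from the best step-2 (relation) action, which is the mechanism by which entity decisions receive feedback from relation extraction.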
Co-occurrence information between words is the basis of training word embeddings. In addition, Chinese characters are composed of subcharacters, and words made up of the same characters or subcharacters usually have similar semantics, but this internal substructure information is usually neglected in popular models. In this paper, we propose a novel method for learning Chinese word embeddings that makes full use of both external co-occurrence context information and internal substructure information. We represent each word as a bag of subcharacter n-grams, and our model learns the vector representations of the word and of its subcharacter n-grams. The final word embedding is the sum of these two kinds of vector representation, which enables the learned word embeddings to take into account both internal structure information and external co-occurrence information. Experiments show that our method outperforms state-of-the-art methods on benchmarks.

Index Terms: Chinese word embedding, subcharacter, n-gram, language model.
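As a rough illustration of the compositional representation the abstract describes, a word vector could be assembled as below. The vocabulary, subcharacter decompositions, hashing scheme, and dimensions are all invented for illustration, and the skip-gram-style training over co-occurrence contexts is not shown.

```python
# Hypothetical sketch: a word's vector is the sum of its own embedding and
# the embeddings of its subcharacter n-grams (fastText-style composition).
import numpy as np

DIM, BUCKETS = 50, 10_000
rng = np.random.default_rng(0)
word_vecs = {}                                   # one vector per word
ngram_vecs = rng.normal(0, 0.1, (BUCKETS, DIM))  # hashed n-gram table

def subchar_ngrams(subchars, n_min=1, n_max=2):
    """Enumerate n-grams over a word's subcharacter sequence."""
    return [tuple(subchars[i:i + n])
            for n in range(n_min, n_max + 1)
            for i in range(len(subchars) - n + 1)]

def word_vector(word, subchars):
    """Word vector = word embedding + sum of subcharacter n-gram embeddings."""
    if word not in word_vecs:
        word_vecs[word] = rng.normal(0, 0.1, DIM)
    vec = word_vecs[word].copy()
    for gram in subchar_ngrams(subchars):
        vec += ngram_vecs[hash(gram) % BUCKETS]
    return vec

# Example: "森林" (森 = 木木木, 林 = 木木) and "林木" (林 = 木木, 木 = 木)
# share subcharacters, so their composed vectors come out correlated.
v1 = word_vector("森林", ["木", "木", "木", "木", "木"])
v2 = word_vector("林木", ["木", "木", "木"])
cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
print(f"cosine similarity: {cos:.3f}")
```

The design point is that words sharing characters or subcharacters share n-gram embeddings, so semantic similarity from internal structure is captured even for words that rarely co-occur.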