Answer selection is a crucial subtask in question answering (QA) systems. Conventional approaches to this task mainly rely on developing linguistic tools, which are limited in both performance and practicability. With the tremendous success of deep learning in natural language processing, answer selection approaches based on deep learning have been well investigated. However, the traditional neural networks employed in existing answer selection models, i.e., recurrent neural networks or convolutional neural networks, typically struggle to capture global text information due to their operating mechanisms. The recent Transformer network is considered good at extracting global information by employing only the self-attention mechanism. Thus, in this paper, we design a Transformer-based neural network for answer selection, where we deploy a bidirectional long short-term memory (BiLSTM) after the Transformer to acquire both global information and sequential features in the question or answer sentence. Unlike the original Transformer, our Transformer-based network focuses on sentence embedding rather than the seq2seq task. In addition, we employ a BiLSTM rather than utilizing positional encoding to incorporate sequential features as the universal Transformer does. Furthermore, we apply three aggregation strategies to generate sentence embeddings for the question and answer, i.e., weighted mean pooling, max pooling, and attentive pooling, leading to three corresponding Transformer-based models: QA-TF_WP, QA-TF_MP, and QA-TF_AP, respectively. Finally, we evaluate our proposals on the popular QA dataset WikiQA. The experimental results demonstrate that our proposed Transformer-based answer selection models outperform several competitive baselines. Specifically, our best model outperforms the state-of-the-art baseline by up to 2.37%, 2.83%, and 3.79% in terms of MAP, MRR, and accuracy, respectively.
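To make the described architecture concrete, here is a minimal PyTorch sketch of a Transformer encoder followed by a BiLSTM, with a max-pooling head corresponding to the QA-TF_MP variant. All layer sizes, the class name `QATransformerEncoder`, and the hyperparameters are illustrative assumptions; the abstract does not specify them.

```python
import torch
import torch.nn as nn

class QATransformerEncoder(nn.Module):
    """Sketch: Transformer for global information, BiLSTM for sequential
    features, max pooling for the sentence embedding (QA-TF_MP-style)."""
    def __init__(self, vocab_size, d_model=256, nhead=4, num_layers=2,
                 lstm_hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Self-attention layers capture global interactions across the sentence.
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers)
        # A BiLSTM (instead of positional encoding) models sequential features.
        self.bilstm = nn.LSTM(d_model, lstm_hidden, batch_first=True,
                              bidirectional=True)

    def forward(self, token_ids):             # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        x = self.transformer(x)               # global information
        x, _ = self.bilstm(x)                 # sequential features
        # Max pooling over time yields one fixed-size sentence embedding.
        return x.max(dim=1).values            # (batch, 2 * lstm_hidden)
```

In this setup, the question and the answer would each be encoded by the same module, and their embeddings compared (e.g., by cosine similarity) to score candidate answers.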
Named entity recognition (NER) is a fundamental NLP task and has long been a focus of research. At the same time, word vector representations, a necessary component of many neural NER models, have become increasingly important. Recently, the emergence of a new type of word representation, BERT, has greatly advanced many NLP tasks. In this paper, we use BERT to train Chinese character embeddings, concatenate them with Chinese radical-level representations, and feed the result into a BGRU-CRF model. A series of experiments shows that our approach achieves good results on a Chinese dataset.
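A hedged sketch of the described pipeline follows: BERT character embeddings are concatenated with radical-level embeddings and passed through a bidirectional GRU with a CRF layer. The radical vocabulary, its lookup, and all dimensions are illustrative assumptions, and the CRF comes from the third-party pytorch-crf package rather than from the paper.

```python
import torch
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF  # pip install pytorch-crf

class BertRadicalBGRUCRF(nn.Module):
    """Sketch of a BGRU-CRF tagger over BERT char + radical embeddings."""
    def __init__(self, num_tags, num_radicals=300, radical_dim=50,
                 gru_hidden=128):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        # Assumed radical-level embedding table; ids must be supplied per char.
        self.radical_embed = nn.Embedding(num_radicals, radical_dim)
        self.bigru = nn.GRU(self.bert.config.hidden_size + radical_dim,
                            gru_hidden, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * gru_hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, radical_ids, tags=None):
        chars = self.bert(input_ids,
                          attention_mask=attention_mask).last_hidden_state
        # Concatenate character-level and radical-level representations.
        x = torch.cat([chars, self.radical_embed(radical_ids)], dim=-1)
        x, _ = self.bigru(x)
        emissions = self.emit(x)
        if tags is not None:   # training: negative CRF log-likelihood as loss
            return -self.crf(emissions, tags, mask=attention_mask.bool())
        return self.crf.decode(emissions, mask=attention_mask.bool())
```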
Query suggestions help users refine their queries after they input an initial query. Previous work mainly concentrated on similarity-based and context-based query suggestion approaches. However, models that adapt to a specific user (personalization) can help improve the probability of the user being satisfied. In this paper, we propose a personalized query suggestion model based on users' search behavior (UB model), where we inject relevance between queries and users' search behavior into a basic probabilistic model. For the relevance between queries, we consider their semantic similarity and co-occurrence, which reflects behavioral information from other users in web search. Regarding the current user's preference for a query, we combine the user's short-term and long-term search behavior in a linear fashion and address the data sparsity problem with Bayesian probabilistic matrix factorization (BPMF). In particular, we also investigate the impact of different personalization strategies (combinations of the user's short-term and long-term search behavior) on the performance of query suggestion reranking. We quantify the improvement of our proposed UB model over a state-of-the-art baseline using the public AOL query logs and show that it beats the baseline in terms of the metrics used in query suggestion reranking. The experimental results show that: (i) for personalized ranking, users' behavioral information helps to improve query suggestion effectiveness; and (ii) given a query, merging information inferred from the short-term and long-term search behavior of a particular user can result in better performance than either approach alone.
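The following toy Python sketch illustrates the reranking idea: each candidate suggestion is scored by combining query relevance with a user-preference term that linearly interpolates short-term and long-term behavior. The multiplicative combination, the weight `lam`, and all scores are assumptions for exposition, not the paper's exact probabilistic model (where sparse long-term preferences are smoothed with BPMF).

```python
def ub_score(query_relevance, short_term_pref, long_term_pref, lam=0.5):
    """Illustrative rerank score for one candidate suggestion.

    query_relevance : semantic similarity + co-occurrence with the input query
    short_term_pref : preference inferred from the current session
    long_term_pref  : preference inferred from the user's search history
    lam             : interpolation weight between the two preference terms
    """
    preference = lam * short_term_pref + (1.0 - lam) * long_term_pref
    return query_relevance * preference

# Example: rerank three candidates for the query "python"
# (relevance, short-term, long-term) values are made up for illustration.
candidates = {
    "python tutorial": (0.9, 0.7, 0.2),
    "python snake":    (0.6, 0.1, 0.1),
    "python pandas":   (0.8, 0.3, 0.9),
}
ranked = sorted(candidates, key=lambda c: ub_score(*candidates[c]),
                reverse=True)
print(ranked)  # candidates ordered by the combined relevance/preference score
```

Varying `lam` corresponds to the different personalization strategies the abstract mentions: `lam=1.0` uses only short-term behavior, `lam=0.0` only long-term behavior.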