Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval 2019
DOI: 10.1145/3331184.3331326

FAQ Retrieval using Query-Question Similarity and BERT-Based Query-Answer Relevance

Abstract: Frequently Asked Question (FAQ) retrieval is an important task where the objective is to retrieve an appropriate Question-Answer (QA) pair from a database based on a user's query. We propose a FAQ retrieval system that considers the similarity between a user's query and a question as well as the relevance between the query and an answer. Although a common approach to FAQ retrieval is to construct labeled data for training, this incurs annotation costs. Therefore, we use a traditional unsupervised information retrieval…
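As a rough illustration of the two-signal approach the abstract describes, the sketch below combines an unsupervised query-question similarity score with a learned query-answer relevance score. The BM25 stand-in, the min-max normalization, and the weighted combination are assumptions for illustration, not the authors' exact formulation (the truncated abstract does not specify the combination rule).

```python
# Minimal sketch of the two-signal FAQ ranker: unsupervised q-Q similarity
# plus learned q-A relevance. Scorers and combination rule are assumptions.
from rank_bm25 import BM25Okapi

def _minmax(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo + 1e-9) for x in xs]

def rank_faq(query, qa_pairs, relevance_fn, alpha=0.5):
    """qa_pairs: list of (question, answer); relevance_fn: learned q-A scorer."""
    bm25 = BM25Okapi([q.lower().split() for q, _ in qa_pairs])
    sim = _minmax(bm25.get_scores(query.lower().split()))        # query-question similarity
    rel = _minmax([relevance_fn(query, a) for _, a in qa_pairs])  # query-answer relevance
    score = [alpha * s + (1 - alpha) * r for s, r in zip(sim, rel)]
    return sorted(range(len(qa_pairs)), key=score.__getitem__, reverse=True)

# Toy usage: a dummy word-overlap scorer stands in for a trained relevance model.
faq = [("How do I reset my password?", "Use the 'Forgot password' link."),
       ("What are your opening hours?", "We are open 9am-5pm on weekdays.")]
overlap = lambda q, a: len(set(q.lower().split()) & set(a.lower().split()))
print(rank_faq("reset password", faq, overlap))  # indices of QA pairs, best first
```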

Cited by 100 publications (79 citation statements)
References 10 publications
“…SUC is a fundamental, multi-class, highly imbalanced textual classification problem. For example, it is widely used for intent (class) detection in goal-oriented dialogue systems (Henderson et al., 2014; Bohus and Rudnicky, 2009), and for frequently asked question (FAQ) retrieval (Sakata et al., 2019; Gupta and Carvalho, 2019; Wang et al., 2017).…”
Section: Introduction
confidence: 99%
“…Two sub-networks receive the two inputs respectively, convert each into a vector, and then calculate the distance between the two vectors with some distance metric. The paper also analyzes the BERT model; BERT has received widespread attention recently and has been successfully applied to question-answering tasks [67,101]. The paper compares the LSTM models and the BERT model with traditional non-DL measures.…”
Section: Methods
confidence: 99%
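The Siamese setup this statement describes can be made concrete with a short sketch: two weight-shared encoders map the inputs to vectors, and a distance metric compares them. The LSTM encoder, layer sizes, and cosine distance below are illustrative assumptions, not the cited paper's exact architecture.

```python
# Sketch of a Siamese network: the same (shared) sub-network encodes both
# inputs, and a distance metric compares the resulting vectors.
import torch
import torch.nn as nn

class SiameseEncoder(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def encode(self, token_ids):
        _, (h, _) = self.lstm(self.embed(token_ids))
        return h[-1]  # final hidden state as the sentence vector

    def forward(self, a_ids, b_ids):
        # Both inputs pass through the same sub-network (shared weights).
        va, vb = self.encode(a_ids), self.encode(b_ids)
        return 1 - torch.cosine_similarity(va, vb)  # cosine distance per pair

model = SiameseEncoder()
a = torch.randint(0, 10000, (2, 12))  # toy batch of token-id sequences
b = torch.randint(0, 10000, (2, 12))
print(model(a, b))
```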
“…The model not only makes use of simple context but also merges structured semantic information, which can provide rich semantics for language representation. Sakata et al. [67] use the BERT model to calculate the similarity between the user's query and the answer. Their method achieves robust, high-performance retrieval.…”
Section: Semantic Similarity Measures
confidence: 99%
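A minimal sketch of BERT-based query-answer scoring in the spirit of Sakata et al. [67]: the query and answer are fed to BERT as a sentence pair, and a classification head scores the pair. The checkpoint name and the freshly initialized head below are assumptions; the original system fine-tunes on QA pairs from the FAQ database before the scores are meaningful.

```python
# Hedged sketch of BERT query-answer scoring via a sentence-pair input.
# The untuned classification head is an assumption; fine-tune on QA pairs first.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def relevance(query: str, answer: str) -> float:
    # BERT receives "[CLS] query [SEP] answer [SEP]" as one sequence.
    inputs = tok(query, answer, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()  # P(relevant)

print(relevance("How do I reset my password?",
                "Click 'Forgot password' on the login page."))
```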
“…Transfer learning via large pre-trained Transformers [46], the prominent case being BERT [7], has led to remarkable empirical successes on a range of NLP problems. The BERT approach to learning textual representations has also significantly improved the performance of neural models on several IR tasks [33,37,54,55,57], which for a long time struggled to outperform classic IR models [53]. In this work we use the no-CL BERT as a strong baseline for the conversation response ranking task.…”
Section: Related Work
confidence: 99%