Proceedings of the 14th ACM International Conference on Information and Knowledge Management 2005
DOI: 10.1145/1099554.1099571
Retrieving answers from frequently asked questions pages on the web

Abstract: We address the task of answering natural language questions by using the large number of Frequently Asked Questions (FAQ) pages available on the web. The task involves three steps: (1) fetching FAQ pages from the web; (2) automatic extraction of question/answer (Q/A) pairs from the collected pages; and (3) answering users' questions by retrieving appropriate Q/A pairs. We discuss our solutions for each of the three tasks, and give detailed evaluation results on a collected corpus of about 3.6Gb of text data (2…
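Step (3) of the pipeline, as the citing works below note, retrieves Q/A pairs with simple vector-space models, treating the user's question as a query against indexed FAQ fields. The following is a minimal, hypothetical sketch of TF-IDF cosine retrieval over Q/A pairs; the FAQ entries, tokenizer, and the choice to merge question and answer into one field are illustrative assumptions, not the paper's actual system.

```python
import math
from collections import Counter

# Hypothetical FAQ corpus: each entry is a (question, answer) pair.
faq_pairs = [
    ("How do I reset my password?", "Click 'Forgot password' on the login page."),
    ("How can I change my email address?", "Open account settings and edit the email field."),
    ("What payment methods are accepted?", "We accept credit cards and bank transfer."),
]

def tokenize(text):
    # Crude normalization: lowercase and strip common punctuation.
    return [t.lower().strip("?.,'\"!") for t in text.split()]

# Index each Q/A pair as one concatenated document. (The paper indexes the
# FAQ question, answer, and source page as separate fields; this sketch
# merges them for brevity.)
docs = [tokenize(q + " " + a) for q, a in faq_pairs]
df = Counter()
for d in docs:
    df.update(set(d))
N = len(docs)

def tfidf(tokens):
    # Smoothed TF-IDF weights for a token sequence.
    tf = Counter(tokens)
    return {t: tf[t] * math.log((N + 1) / (df[t] + 1)) for t in tf}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query):
    # Return the Q/A pair whose document vector is closest to the query.
    qvec = tfidf(tokenize(query))
    scored = [(cosine(qvec, tfidf(d)), pair) for d, pair in zip(docs, faq_pairs)]
    return max(scored)[1]

print(retrieve("I forgot my password, how do I reset it?"))
```

In a fielded setup closer to the paper's, the question, answer, and source document would each be scored separately and the field scores combined, which lets the retrieval model weight a match on the FAQ question more heavily than a match buried in the answer text.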


Cited by 81 publications (54 citation statements)
References 33 publications
“…For example, [15] reports on an attempt to answer open-domain questions asked by users on Web forums, by searching the answer in a large but limited set of FAQ QA pairs collected in a previous step. The authors use simple vector-space retrieval models over the user's question treated as a query and the FAQ question, answer, and source document indexed as fields making up the item to be returned.…”
Section: Related Work
confidence: 99%
“…The potential discrepancy between a user's actual information need and what may be inferred from its expression in a textual query is a pervasive problem in information retrieval [21]. As an example, to assess how well suited to a question the answers retrieved by their system were, the authors of [15] had raters "back-generate" a possible information need behind each question before judging the quality of the answers provided by the system. Those researchers point out that for some questions the assessors were unable to reconstruct the original information need, which means they were unable to judge the quality of the answers.…”
Section: Difficulty of the Task
confidence: 99%
“…Answers and Quora. Previous work [9,10,24] studied effectively finding previously answered questions that are relevant to the new question asked by the user. Different retrieval models have been proposed to calculate the similarity between questions.…”
Section: Complex Query
confidence: 99%
“…Some recent works [15,17] used more advanced translation-based approaches to retrieve FAQ data. [11] and [10] mined the FAQ data from the Web and implemented their own retrieval systems.…”
Section: Related Work
confidence: 99%