2002
DOI: 10.1109/tpds.2002.1011413

Performance analysis of a distributed question/answering system

Abstract: The problem of question/answering (Q/A) is to find answers to open-domain questions by searching large collections of documents. Unlike information retrieval systems, very common today in the form of Internet search engines, Q/A systems do not retrieve documents, but instead provide short, relevant answers located in small fragments of text. This enhanced functionality comes with a price: Q/A systems are significantly slower and require more hardware resources than information retrieval systems. This p…

Cited by 34 publications (12 citation statements)
References 25 publications
“…This design follows previous proposals for complex search engines [24], where each node is able to compute queries autonomously. We illustrate an adaptation of the previously described architecture to a question answering (QA) system with three computing blocks (question processing, passage retrieval, and answer extraction) and its corresponding local caching pools in Figure 1.…”
Section: Related Work
confidence: 99%
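The per-node architecture described in this citation, three computing blocks each backed by its own local caching pool, can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the class and method names (QANode, question_processing, and so on) and the use of lru_cache as the "caching pool" are hypothetical, not the cited system's actual code.

```python
# Illustrative sketch (not the cited system's code): a QA node that computes a
# query autonomously through three blocks, each with its own local cache pool.
from functools import lru_cache


class QANode:
    """One node of a distributed QA system; answers a question end to end."""

    @lru_cache(maxsize=1024)  # local caching pool for question processing
    def question_processing(self, question: str) -> str:
        # Hypothetical: extract keywords from the question.
        return " ".join(w for w in question.lower().split() if len(w) > 3)

    @lru_cache(maxsize=1024)  # local caching pool for passage retrieval
    def passage_retrieval(self, processed: str) -> tuple:
        # Hypothetical: fetch candidate passages from the node's local index.
        return (f"passage matching '{processed}'",)

    @lru_cache(maxsize=1024)  # local caching pool for answer extraction
    def answer_extraction(self, passages: tuple) -> str:
        # Hypothetical: pick the best short answer from the passages.
        return passages[0]

    def answer(self, question: str) -> str:
        processed = self.question_processing(question)
        passages = self.passage_retrieval(processed)
        return self.answer_extraction(passages)


node = QANode()
print(node.answer("Who invented the telephone?"))
```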
“…I/O operations in real systems can be either synchronous or asynchronous. It is assumed in our mechanism that all I/O operations are synchronous, because many I/O-intensive parallel applications issue synchronous read/write operations [Surdeanu et al. 2002; Uysal et al. 1997]. This assumption is conservative in the sense that it underestimates load balancing benefits (i.e., this assumption causes a number of undesired migrations with negative impact).…”
Section: Predicting Response Time
confidence: 99%
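A small numeric sketch of the difference this assumption makes: modeling I/O as synchronous charges a query its CPU time plus its I/O time, while overlapped (asynchronous) I/O would only charge roughly the larger of the two. The function names and the figures below are illustrative assumptions, not taken from the cited mechanism.

```python
# Sketch under stated assumptions: predicted service time of a query when I/O
# is modeled as synchronous (serialized with CPU) versus fully overlapped.
def predicted_time_synchronous(cpu_time: float, io_time: float) -> float:
    return cpu_time + io_time          # CPU and I/O never overlap


def predicted_time_overlapped(cpu_time: float, io_time: float) -> float:
    return max(cpu_time, io_time)      # best case: full CPU/I-O overlap


cpu, io = 0.8, 0.5                     # seconds, hypothetical measurements
print(predicted_time_synchronous(cpu, io))   # 1.3
print(predicted_time_overlapped(cpu, io))    # 0.8
```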
“…The algorithm stores a history record of the CPU and I/O time from previous queries and applies this record to estimate the fraction of CPU and I/O of the next computing block of the current query. The cost to compute the query q in node i is a weighted sum of the node load (Load_i) and W_CPU(q) [12]. We call this combined cost…”
Section: Load Balancing
confidence: 99%
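A rough sketch of the combined cost described in this citation: keep a history record of CPU and I/O times from previous queries, estimate the CPU demand of the next computing block from that record, and combine it with the node's current load in a weighted sum. The weight alpha, the history structure, and the name combined_cost are assumptions made here for illustration; the actual formula is given in [12].

```python
# Sketch under stated assumptions (not the exact formula from [12]).
from collections import deque


class LoadBalancer:
    def __init__(self, alpha: float = 0.5, history_size: int = 100):
        self.alpha = alpha                         # weight between node load and CPU demand
        self.history = deque(maxlen=history_size)  # (cpu_time, io_time) of past queries

    def record(self, cpu_time: float, io_time: float) -> None:
        """Append the measured CPU and I/O time of a completed query."""
        self.history.append((cpu_time, io_time))

    def estimated_cpu(self) -> float:
        """Estimate W_CPU(q) for the next computing block from the history record."""
        if not self.history:
            return 0.0
        return sum(cpu for cpu, _ in self.history) / len(self.history)

    def combined_cost(self, node_load: float) -> float:
        """Weighted sum of the node load and the estimated CPU demand of query q."""
        return self.alpha * node_load + (1 - self.alpha) * self.estimated_cpu()


lb = LoadBalancer()
lb.record(cpu_time=0.8, io_time=0.5)
lb.record(cpu_time=1.2, io_time=0.3)
print(lb.combined_cost(node_load=2.0))  # the scheduler would pick the node with the lowest cost
```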