Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 2009
DOI: 10.3115/1690219.1690247

A graph-based semi-supervised learning for question-answering

Abstract: We present a graph-based semi-supervised learning method for the question-answering (QA) task of ranking candidate sentences. Using textual entailment analysis, we obtain entailment scores between a natural language question posed by the user and the candidate sentences returned from the search engine. The textual entailment between two sentences is assessed via features representing high-level attributes of the entailment problem, such as sentence structure matching, question-type named-entity matching based on a questi…

Cited by 22 publications (24 citation statements)
References 21 publications
“…This component identifies and classifies the type of a given question among a set of predefined question types. The type of a question typically guides the search strategy in the passage-retrieval phase [2] or decides which types of named entities to extract in the answer-extraction phase [3]. To improve recall, one strategy … (Table 2: Features extracted from the three phases of the pipelined answer-generation system: question analysis (S1), passage retrieval (S2), and answer extraction (S3).)…”
Section: System Object Graph
confidence: 99%
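The routing described above — a classified question type selecting which named-entity types to extract — can be sketched as follows. The type names and mapping here are illustrative assumptions, not the taxonomy used in the cited systems:

```python
# Hypothetical sketch: a question-type classification guiding answer
# extraction. Type names and the NE mapping are assumptions for
# illustration only.
EXPECTED_ENTITY_TYPES = {
    "who": {"PERSON", "ORGANIZATION"},
    "where": {"LOCATION"},
    "when": {"DATE", "TIME"},
    "how_many": {"NUMBER"},
}

def entity_types_for(question_type: str) -> set:
    """Return the named-entity types to extract for a classified question type."""
    return EXPECTED_ENTITY_TYPES.get(question_type, set())
```

A "who" question would thus restrict answer extraction to PERSON and ORGANIZATION entities, while an unrecognized type falls back to extracting nothing.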
“…If we detect that they may represent the same concept or related concepts, their weights can be proportional to the similarity. Based on a result similarity function, we can replace w_ki with Sim(R_k, R_i) in Equation 3. In this paper, we define the similarity function as…”
Section: Relevance Voting
confidence: 99%
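Since Equation 3 is not reproduced in this excerpt, the similarity-weighted voting idea can only be sketched under assumptions: below, Sim is taken to be a plain cosine similarity over bag-of-words vectors, and a result's relevance is the sum of its similarity-weighted votes. Both choices are placeholders, not the cited paper's actual definitions:

```python
# Hypothetical sketch of similarity-weighted relevance voting. The cosine
# Sim and the weighted vote sum are assumed stand-ins for Equation 3.
import math
from collections import Counter

def sim(r_k: str, r_i: str) -> float:
    """Cosine similarity between two results' bag-of-words vectors."""
    a, b = Counter(r_k.split()), Counter(r_i.split())
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def relevance_score(r_i: str, other_results: list) -> float:
    """Vote for result R_i, weighting each voter R_k by Sim(R_k, R_i)."""
    return sum(sim(r_k, r_i) for r_k in other_results)
```

Identical results then vote with weight 1, unrelated results with weight 0, and partially overlapping results in between — which is exactly the "weights proportional to similarity" behaviour the quote describes.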
“…In order to formulate semi-supervised truth discovery as an optimization problem, we choose the loss function based on studies of semi-supervised graph learning [18][19][20], which has been widely used in applications ranging from question answering [3] to image annotation [14].…”
Section: Problem Formulation
confidence: 99%
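The graph-based semi-supervised loss alluded to above typically combines a smoothness term over edge weights with a fit term on labeled nodes, and can be minimized by iterative label propagation. The sketch below uses common label-propagation conventions (W, f, y) rather than the cited papers' exact notation, and clamps labeled nodes as a hard fit constraint:

```python
# Hypothetical sketch of graph-based semi-supervised label propagation.
# W, f, y follow generic label-propagation conventions, not the exact
# notation of the cited works.
def propagate_labels(W, y, labeled, iters=100):
    """W: symmetric weight matrix (list of lists); y: initial soft labels;
    labeled: set of indices whose labels are clamped. Returns soft labels f."""
    n = len(W)
    f = list(y)
    for _ in range(iters):
        for i in range(n):
            if i in labeled:
                continue  # clamp labeled nodes (the fit term)
            total = sum(W[i][j] for j in range(n))
            if total > 0:
                # each unlabeled node takes the weighted average of its
                # neighbours, minimizing the smoothness term over edges
                f[i] = sum(W[i][j] * f[j] for j in range(n)) / total
    return f
```

On a three-node chain with the endpoints labeled 1 and 0, the middle node settles at 0.5 — the smoothest assignment consistent with the clamped labels.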
“…Using textual entailment analysis, we obtain entailment scores between a natural language question posed by the user and the candidate sentences returned from the search engine [1]. The textual entailment between two sentences is assessed via features representing high-level attributes of the entailment problem, such as sentence structure matching, question-type named-entity matching based on a question classifier, etc.…”
Section: Introduction
confidence: 99%
“…An SSL method demonstrates that using more unlabeled data points can improve the answer-ranking task of QA. A graph is built over the labeled and unlabeled data, using match scores of textual entailment features as similarity weights between data points [1]. A summarization method is applied to the graph to make the computations feasible on large datasets.…”
Section: Introduction
confidence: 99%
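The graph construction described above — candidate sentences as nodes, entailment match scores as similarity weights — can be sketched as follows. Here `entailment_score` is a stand-in for the paper's feature-based entailment analysis, not its actual scoring function:

```python
# Hypothetical sketch of the similarity graph over candidate sentences.
# entailment_score is an assumed placeholder for the feature-based
# entailment analysis described in the excerpt.
def build_graph(candidates, entailment_score):
    """Return a dense symmetric weight matrix over candidate sentences."""
    n = len(candidates)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            w = entailment_score(candidates[i], candidates[j])
            W[i][j] = W[j][i] = w  # symmetric similarity weight
    return W
```

A label-propagation step over such a matrix is what lets the few labeled question-answer pairs rank the many unlabeled candidate sentences.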