Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 2007
DOI: 10.1145/1277741.1277802

Structured retrieval for question answering

Abstract: Bag-of-words retrieval is popular among Question Answering (QA) system developers, but it does not support constraint checking and ranking on the linguistic and semantic information of interest to the QA system. We present an approach to retrieval for QA, applying structured retrieval techniques to the types of text annotations that QA systems use. We demonstrate that the structured approach can retrieve more relevant results, more highly ranked, compared with bag-of-words, on a sentence retrieval task. We als…

Cited by 73 publications (61 citation statements)
References 18 publications
“…Most approaches to extracting the expected answer type perform some sort of syntactic analysis on the question (by chunking, shallow parsing, or probabilistic deep parsing) in order to find the question focus. Based on the question focus, the question word, and named entity classification, the expected answer type is then determined via semantic generalization using lexical semantic resources such as WordNet, either by manually defined mappings of WordNet hyponym subhierarchies to answer taxonomies (Harabagiu et al 2000; see also Section 3), by feature-based classifiers resting on machine learning techniques (Li/Roth 2006), or by statistical methods (Ittycheriah 2006). Nyberg et al (2005) and Bilotti et al (2007) try to achieve this goal by shallow semantic parsing, whereas Harabagiu et al (2000) and Mollá/Gardiner (2004) transform the results of a syntactic parser into shallow logical forms (conjunctive predicate-argument structures). These approaches make use of publicly available probabilistic parsers trained on annotated corpora.…”
Section: Methodological Aspects
confidence: 99%
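The pipeline described in this citation statement can be sketched minimally. The mappings below are purely illustrative stand-ins for the WordNet hyponym lookups and learned classifiers that real systems use; none of the table entries come from the cited papers.

```python
# Minimal sketch of expected-answer-type classification from a question.
# Real systems derive the focus via parsing and generalize it through
# WordNet or a trained classifier (Li/Roth 2006); these dicts are toy
# stand-ins for those components.

# Question-word heuristics (illustrative).
WH_TO_TYPE = {
    "who": "PERSON",
    "where": "LOCATION",
    "when": "DATE",
    "how many": "NUMBER",
}

# Hypothetical focus-noun table standing in for a WordNet hyponym lookup.
FOCUS_TO_TYPE = {
    "city": "LOCATION",
    "president": "PERSON",
    "year": "DATE",
}

def expected_answer_type(question: str) -> str:
    """Guess the expected answer type of a natural-language question."""
    q = question.lower().rstrip("?").strip()
    # Direct mapping from the question word.
    for phrase, etype in WH_TO_TYPE.items():
        if q.startswith(phrase):
            return etype
    # "what/which X ..." -> classify by the question focus noun X.
    tokens = q.split()
    if len(tokens) > 1 and tokens[0] in ("what", "which"):
        return FOCUS_TO_TYPE.get(tokens[1], "UNKNOWN")
    return "UNKNOWN"
```

The two-stage fallback mirrors the survey's description: the question word alone often suffices, and only "what/which" questions require inspecting the focus noun.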
“…The version of Indri used throughout this thesis extends the notion of a field to include a parent pointer, through which a field can optionally point to another field within the same document [43,8]. This pointer provides a means of checking relationships between fields that do not enclose one another.…”
Section: Indri Search Engine
confidence: 99%
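The parent-pointer extension makes it possible to constrain relationships between non-nested fields directly in the query. The following is a sketch in the flavor of Indri's extent-retrieval syntax as extended in that work; the field names (sentence, target, arg0, arg1) assume PropBank-style annotations, and the exact operator syntax here is illustrative rather than quoted from the system:

```
#combine[sentence](
  #max(#combine[./target]( buy
    #max(#combine[./arg0]( #any:organization ))
    #max(#combine[./arg1]( shares )) )) )
```

Read informally: rank sentences whose target (predicate) field matches "buy" and whose arg0 and arg1 child fields, linked to that target via parent pointers, match the given constraints.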
“…The experiments presented here, originally published in [8], evaluate whether the structured retrieval approach can provide a better-quality passage ranking than a baseline retrieval approach consisting of keyterms drawn from the question, with named-entity support for the expected answer type, which is considered a strong baseline for QA.…”
Section: Structured Retrieval
confidence: 99%
“…Bilotti et al. use structured queries to retrieve text satisfying PropBank-style semantic constraints [3]. Their method is shown to be effective in certain cases, yet it is poor at combining evidence from bag-of-words and structured features and is not robust when ranking partial matches.…”
Section: Related Work
confidence: 99%