2005
DOI: 10.1017/s1351324905003955

Learning question classifiers: the role of semantic information

How to cite this article: Xin Li and Dan Roth (2006). Learning question classifiers: the role of semantic information. Link to this article: http://journals.cambridge.org/abstract_S1351324905003955

Abstract: To respond correctly to a free form factual question given a large collection of text data, one needs to understand the question to a level that allows determining some of the constraints the question imposes on a possible answer. These constraints may include a semantic classification of the sought-after answer […]


Cited by 211 publications (188 citation statements); references 21 publications (33 reference statements).
“…These are categorized according to different taxonomies of different grains. We consider the coarse-grained classification scheme described in [16,17]: Abbreviations, Descriptions (e.g. definition and manner), Entity (e.g.…”
Section: Semantic Applications of Parse Tree Kernels (mentioning)
confidence: 99%
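
For context, the excerpt names only the first classes of the taxonomy: the coarse-grained scheme of Li and Roth distinguishes six classes (Abbreviation, Description, Entity, Human, Location, Numeric), each subdivided into fine-grained subclasses. The Python sketch below is a hypothetical rendering of that scheme; the coarse class names follow the paper, but the sample fine classes shown and the wh-word heuristic are invented for illustration and are not the authors' classifier.

# Coarse-grained classes from Li & Roth's question taxonomy, each with a few
# illustrative fine-grained subclasses (not the full 50-class inventory).
COARSE_CLASSES = {
    "ABBR": ["abbreviation", "expansion"],
    "DESC": ["definition", "description", "manner", "reason"],
    "ENTY": ["animal", "color", "event", "product"],
    "HUM":  ["individual", "group", "title"],
    "LOC":  ["city", "country", "mountain", "state"],
    "NUM":  ["count", "date", "distance", "money"],
}

def coarse_class_baseline(question: str) -> str:
    """Toy wh-word heuristic for the coarse class (illustrative only;
    real classifiers use far richer lexical and semantic features)."""
    q = question.lower()
    if q.startswith("who"):
        return "HUM"
    if q.startswith("where"):
        return "LOC"
    if q.startswith("when") or q.startswith("how many") or q.startswith("how much"):
        return "NUM"
    if q.startswith("why") or q.startswith("how "):
        return "DESC"
    return "ENTY"   # 'what'/'which' questions are the ambiguous majority

print(coarse_class_baseline("Who wrote the Iliad?"))         # HUM
print(coarse_class_baseline("What is the capital of Peru?"))  # ENTY (ambiguous)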
“…As we adopted the coarse-grained question taxonomy introduced in Section 4, we can compare with literature results, e.g. [16,17].…”
Section: Experiments on Question Classification (mentioning)
confidence: 99%
“…Most approaches to extracting the expected answer type perform some sort of syntactic analysis on the question (by chunking, shallow parsing, or probabilistic deep parsing) in order to find the question focus. Based on the question focus, the question word, and named entity classification, the expected answer type is then determined via semantic generalization using lexical semantic resources such as WordNet, either by manually defined mappings of WordNet hyponym subhierarchies to answer taxonomies (Harabagiu et al 2000; see also Section 3) or by feature-based classifiers resting on machine learning techniques (Li/Roth 2006) or statistical methods (Ittycheriah 2006). Nyberg et al (2005) and Bilotti et al (2007) try to achieve this goal by shallow semantic parsing, whereas Harabagiu et al (2000) and Mollá/Gardiner (2004) transform the results of a syntactic parser into shallow logical forms (conjunctive predicate-argument structures). These approaches make use of publicly available probabilistic parsers trained on annotated corpora.…”
Section: Methodological Aspects (mentioning)
confidence: 99%
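
As a concrete illustration of the feature-based route described above, the sketch below trains a classifier over shallow question features (wh-word, following word, head word). It is a minimal, hypothetical example: the feature set, the toy training data, and the use of scikit-learn's logistic regression are assumptions for illustration, not the SNoW-based setup actually used by Li and Roth (2006).

# Minimal feature-based expected-answer-type classifier (illustrative sketch).
# Assumes scikit-learn is installed; the tiny training set is made up.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def features(question: str) -> dict:
    """Very shallow feature extraction: wh-word, next word, last word.
    Real systems add chunks, parses, named entities, and WordNet classes."""
    tokens = question.rstrip("?").lower().split()
    return {
        "wh": tokens[0],
        "second": tokens[1] if len(tokens) > 1 else "",
        "head": tokens[-1],
    }

train = [
    ("Who invented the telephone?", "HUM"),
    ("Where is the Louvre located?", "LOC"),
    ("When did the war end?", "NUM"),
    ("What is the capital of Peru?", "LOC"),
    ("What does NASA stand for?", "ABBR"),
    ("How many moons does Mars have?", "NUM"),
]

pipeline = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
pipeline.fit([features(q) for q, _ in train], [label for _, label in train])

print(pipeline.predict([features("Who painted the Mona Lisa?")]))  # likely ['HUM']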
“…For instance, the QA system of Bouma et al (2005) uses semantic relations extracted from the Dutch EuroWordNet (Vossen 1998). Applications of the Princeton WordNet to expected answer type classification are described in Li/Roth (2006), where a classifier based on machine learning techniques is applied to question features, and in Harabagiu et al (2000), where WordNet subhierarchies are manually linked to an answer taxonomy. Moreover, some embedded LRs used for QA are linked to WordNet-type resources, either for enriching a syntactic lexicon with semantic information (Crouch/King 2005) or for supplying a semantic lexicon with fallback information (Osswald 2004).…”
Section: Non-embedded Lexical Resources (mentioning)
confidence: 99%
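
The WordNet-based generalization mentioned in the excerpts above can be illustrated with a short NLTK snippet: look up a question's focus noun and walk its hypernym path until a node mapped to an answer type is found. This is a sketch under assumptions: it relies on NLTK's WordNet interface (which requires downloading the WordNet data), and the small hypernym-to-type mapping is invented for illustration rather than being the manually defined mappings of Harabagiu et al. (2000).

# Sketch of answer-type mapping via WordNet hypernyms (requires: pip install nltk
# and nltk.download('wordnet') for the corpus data).
from nltk.corpus import wordnet as wn

# Hypothetical mapping from a few WordNet anchor synsets to answer types.
HYPERNYM_TO_TYPE = {
    "person.n.01": "HUM",
    "location.n.01": "LOC",
    "time_period.n.01": "NUM:date",
    "animal.n.01": "ENTY:animal",
}

def answer_type_for_focus(noun: str) -> str:
    """Walk the hypernym paths of the focus noun's first synset and return
    the first answer type whose anchor synset appears on a path."""
    synsets = wn.synsets(noun, pos=wn.NOUN)
    if not synsets:
        return "UNKNOWN"
    for path in synsets[0].hypernym_paths():
        for synset in path:
            mapped = HYPERNYM_TO_TYPE.get(synset.name())
            if mapped:
                return mapped
    return "UNKNOWN"

print(answer_type_for_focus("president"))  # HUM (via person.n.01)
print(answer_type_for_focus("river"))      # UNKNOWN with this toy mapping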
“…It is a dominant type in question answering systems. Li and Roth (2006) find that the distribution of what-type questions over the semantic classes is quite diverse, and that they are more difficult to classify than other types.…”
Section: Introduction (mentioning)
confidence: 99%