Proceedings of the Ninth International Workshop on Parsing Technology (Parsing '05), 2005
DOI: 10.3115/1654494.1654514
Statistical shallow semantic parsing despite little training data

Cited by 12 publications (10 citation statements) | References 16 publications
“…The experimental results showed that these semantic classes by themselves are very helpful for question classification, yielding an accuracy of 93.2%. Moreover, these semantic classes can also be used to augment the training set, as demonstrated by Bhagat et al. (2005). With all semantic features combined, these authors achieved an accuracy of 94.0%, which, to date, outperforms every other question classifier on the standard training set of Li and Roth for coarse-grained classification.…”
Section: Machine Learning-based Question Classifiers
confidence: 94%
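The feature-augmentation idea in the statement above — adding word-level semantic classes alongside lexical features for question classification — can be sketched as follows. The toy semantic lexicon, class names, and feature naming here are illustrative assumptions, not the actual resources or features used in the cited work.

```python
# Sketch: augmenting a question's bag-of-words features with coarse
# semantic-class features, in the spirit of semantic-class-based
# question classification. SEMANTIC_CLASSES is a toy lexicon invented
# for this example, not the lexicon of the cited work.

SEMANTIC_CLASSES = {
    "capital": "LOCATION",
    "city": "LOCATION",
    "inventor": "HUMAN",
    "author": "HUMAN",
    "year": "NUMERIC",
    "cost": "NUMERIC",
}

def question_features(question: str) -> list[str]:
    """Return lexical WORD= features plus a SEM= feature for every
    token covered by the semantic lexicon."""
    tokens = question.lower().strip("?").split()
    features = [f"WORD={t}" for t in tokens]
    features += [f"SEM={SEMANTIC_CLASSES[t]}" for t in tokens
                 if t in SEMANTIC_CLASSES]
    return features

feats = question_features("What city hosted the games?")
```

A downstream classifier would then consume both feature types, letting rare words ("city") share evidence with their semantic class ("LOCATION").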
“…ELIZA can be seen as the predecessor of today's chatbots, such as Cleverbot and ALICE, which try to simulate human conversation. Many of these systems aim to pass the Turing test and win the Loebner Prize. As for LUNAR, this natural language interface, capable of answering questions about moon rocks, also deserves mention, as it started the line of research responsible for a panoply of Natural Language Interfaces to Databases (NLIDB) in the 80s and 90s, which ended up converging into question answering (QA) systems.…”
Section: Natural Language Understanding
confidence: 99%
“…Sub-symbolic techniques, as in so many other research fields, are currently being widely applied to NLU. For instance, Bhagat [4] treats NLU as a classification problem, i.e., the final goal is to classify an utterance. Thus, Maximum Entropy and Support Vector Machines are used in some of the experiments.…”
Section: Natural Language Understanding
confidence: 99%
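Treating NLU as utterance classification, as the statement above describes, can be illustrated with a minimal maximum-entropy (binary logistic-regression) classifier trained by gradient ascent on bag-of-words features. The tiny corpus and the "greet"/"ask" label set are invented for this sketch; a real system would use richer features and a library implementation rather than this hand-rolled trainer.

```python
import math
from collections import defaultdict

# Toy utterance-classification data (labels and examples invented for
# illustration): "greet" vs "ask" stands in for real dialogue acts.
DATA = [
    ("hello there", "greet"),
    ("hi how are you", "greet"),
    ("good morning", "greet"),
    ("what time is it", "ask"),
    ("where is the station", "ask"),
    ("how much does it cost", "ask"),
]

def featurize(utterance):
    """Bag-of-words features: one feature per lowercased token."""
    return utterance.lower().split()

# Binary maximum-entropy model: P(ask | x) = sigmoid(w . x).
weights = defaultdict(float)

def prob_ask(tokens):
    z = sum(weights[t] for t in tokens)
    return 1.0 / (1.0 + math.exp(-z))

# Gradient ascent on the conditional log-likelihood.
for _ in range(200):
    for text, label in DATA:
        tokens = featurize(text)
        y = 1.0 if label == "ask" else 0.0
        error = y - prob_ask(tokens)
        for t in tokens:
            weights[t] += 0.1 * error

def classify(utterance):
    return "ask" if prob_ask(featurize(utterance)) > 0.5 else "greet"
```

The same setup extends to multi-class dialogue acts by keeping one weight vector per class, which is how maximum-entropy utterance classifiers are typically formulated.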
“…Regarding sub-symbolic NLU, some systems receive text as input [5] and many deal with transcriptions from an Automatic Speech Recognizer [9]. In fact, in speech understanding, the new trend considers NLU from a machine learning point of view.…”
Section: Related Work
confidence: 99%