Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014)
DOI: 10.3115/v1/d14-1070

A Neural Network for Factoid Question Answering over Paragraphs

Abstract: Text classification methods for tasks like factoid question answering typically use manually defined string-matching rules or bag-of-words representations. These methods are ineffective when the question text contains very few individual words (e.g., named entities) that are indicative of the answer. We introduce a recursive neural network (RNN) model that can reason over such input by modeling textual compositionality. We apply our model, QANTA, to a dataset of questions from a trivia competition called quiz bowl…
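The compositional reasoning the abstract describes can be sketched as a recursive composition over a question's dependency tree. This is a minimal illustration, not the paper's implementation: the names (`compose`, `W_word`, `W_rel`) are hypothetical, and a single shared relation matrix stands in for the per-dependency-relation matrices the model actually learns.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy embedding dimension

# Illustrative parameters: the paper learns one composition matrix per
# dependency relation; here one shared matrix W_rel stands in for all of them.
W_word = rng.normal(scale=0.1, size=(d, d))  # transforms a node's word vector
W_rel = rng.normal(scale=0.1, size=(d, d))   # transforms a child's hidden state
b = np.zeros(d)

def compose(word_vec, children):
    """Hidden state of a tree node from its word vector and its children."""
    total = W_word @ word_vec + b
    for child_hidden in children:
        total += W_rel @ child_hidden
    return np.tanh(total)

# A tiny two-level tree: a root word with two leaf children.
leaf1 = compose(rng.normal(size=d), [])
leaf2 = compose(rng.normal(size=d), [])
root = compose(rng.normal(size=d), [leaf1, leaf2])
print(root.shape)  # (8,) -- one fixed-size vector for the whole (sub)question
```

The root vector can then be compared against answer representations, which is the role the question embedding plays in the full model.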

Cited by 271 publications (216 citation statements)
References 16 publications
“…It is shown that proximity in this numeric space actually embodies algebraic semantic relationships, such as "King − Man + Woman ≈ Queen" (Mikolov et al., 2013). As demonstrated in previous work, this numeric representation of words has led to big improvements in many NLP tasks such as machine translation (Sutskever et al., 2014), question answering (Iyyer et al., 2014) and document ranking (Shen et al., 2014). Radical embedding is similar to word embedding except that the embedding is at the radical level.…”
Section: Deep Network With Radical Embeddings
Mentioning; confidence: 95%
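The word-analogy regularity cited above (Mikolov et al., 2013) can be demonstrated with vector arithmetic and cosine similarity. The embeddings below are toy vectors constructed so the offsets hold by design; real word2vec vectors exhibit the pattern only approximately.

```python
import numpy as np

# Toy 2-d embeddings: dimension 0 ~ "royalty", dimension 1 ~ "male".
vecs = {
    "king":  np.array([1.0, 1.0]),
    "queen": np.array([1.0, 0.0]),
    "man":   np.array([0.0, 1.0]),
    "woman": np.array([0.0, 0.0]),
    "apple": np.array([0.3, 0.2]),  # distractor
}

def nearest(target, exclude):
    """Word whose vector has the highest cosine similarity to `target`."""
    def cos(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-9
        return float(a @ b) / denom
    return max((w for w in vecs if w not in exclude),
               key=lambda w: cos(vecs[w], target))

# king - man + woman points at queen
result = nearest(vecs["king"] - vecs["man"] + vecs["woman"],
                 exclude={"king", "man", "woman"})
print(result)  # queen
```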
“…Recurrent Neural Networks have been recommended for processing sequences [10], while Recursive Neural Networks are collections of recurrent networks that can address trees [6]. Another application uses Recurrent Neural Networks for question answering systems about paragraphs [15], and a Neural Responding Machine (NRM) based on neural networks has been proposed as a short-text conversation generator [21]. In addition, Recurrent Neural Network models have offered state-of-the-art performance for sentiment classification [13], target-dependent sentiment classification [25] and question answering [15]. An Adaptive Recurrent Neural Network (AdaRNN) has been introduced for sentiment classification of Twitter posts based on the context and syntactic relationships between words [2].…”
Section: Recurrent Neural Network Approaches
Mentioning; confidence: 99%
“…Bordes et al. (2014) jointly embedded words and knowledge base constituents into the same vector space to measure the relevance of question and answer sentences in that space. Iyyer et al. (2014) worked on the quiz bowl task, an application of recursive neural networks to factoid question answering over paragraphs. The correct answers are identified from a relatively small fixed set of candidate answers, which are entities rather than sentences.…”
Mentioning; confidence: 99%
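Selecting an answer from a small fixed set of candidate entities, as described above, reduces to scoring each entity against the question representation. A minimal sketch, with hypothetical entity names and hand-built orthogonal embeddings so the outcome is deterministic; a trained model would learn the embeddings jointly with the question encoder.

```python
import numpy as np

# Hypothetical fixed candidate set: answers are entities, not sentences.
# Toy orthogonal embeddings keep this example deterministic.
entity_emb = {
    "Charles_Dickens": np.array([1.0, 0.0, 0.0]),
    "Jane_Austen":     np.array([0.0, 1.0, 0.0]),
    "Mark_Twain":      np.array([0.0, 0.0, 1.0]),
}

def predict(question_vec):
    """Return the candidate whose embedding has the highest inner
    product with the question representation."""
    return max(entity_emb, key=lambda e: float(entity_emb[e] @ question_vec))

# A question representation leaning toward the Mark Twain direction.
q = np.array([0.1, 0.2, 0.9])
print(predict(q))  # Mark_Twain
```

Because the candidate set is small and fixed, this argmax over inner products is cheap, which is part of what makes entity-level answer selection tractable compared with generating or ranking full sentences.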