Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1601
MICRON: Multigranular Interaction for Contextualizing RepresentatiON in Non-factoid Question Answering

Abstract: This paper studies the problem of non-factoid question answering, where the answer may span multiple sentences. Existing solutions can be categorized into representation- and interaction-focused approaches. We combine their complementary strengths in a hybrid approach that allows multi-granular interactions but represents them at the word level, enabling easy integration with strong word-level signals. Specifically, we propose MICRON: Multigranular Interaction for Contextualizing RepresentatiON, a novel approach w…

Cited by 5 publications (4 citation statements)
References 15 publications
“…Current models seemingly match similar keywords or phrases of the questions and answers, often without truly understanding them in context. (Rücklé et al., 2019b), ‡ is the MICRON model (Han et al., 2019), is the BERT model in (Ma et al., 2019), and is MV-DASE (Poerner and Schütze, 2019). Table 5: A mistake of MultiCQA RBa-lg (zero-shot transfer) on AskUbuntu.…”
Section: Discussion
confidence: 99%
“…The IR baselines are the same as in §4.1 (TF*IDF for LAS, BM25 for WikiPassageQA and InsuranceQA, and a search engine ranking for SemEval17, the official challenge baseline). (Rücklé et al., 2019b), ‡ is the MICRON model (Han et al., 2019), is the BERT model in (Ma et al., 2019), and is MV-DASE (Poerner and Schütze, 2019).…”
Section: Models
confidence: 99%
“…However, such an approach also targets short answers, not answers of varying length. Beyond factoid questions, retrieving a paragraph to answer why- or how-questions has been studied (Rücklé, Moosavi, and Gurevych 2019; Han et al. 2019; Tan et al. 2015). While these approaches can deal with longer answers, they assume that pre-segmented paragraphs are available, which is not the case in our problem setting.…”
Section: Text-based Extractive QA
confidence: 99%