Proceedings of the Graph-Based Methods for Natural Language Processing (TextGraphs) 2020
DOI: 10.18653/v1/2020.textgraphs-1.14

Red Dragon AI at TextGraphs 2020 Shared Task: LIT: LSTM-Interleaved Transformer for Multi-Hop Explanation Ranking

Abstract: Explainable question answering for science questions is a challenging task that requires multi-hop inference over a large set of fact sentences. To counter the limitations of methods that view each query-document pair in isolation, we propose the LSTM-Interleaved Transformer, which incorporates cross-document interactions for improved multi-hop ranking. The LIT architecture can leverage prior ranking positions in the re-ranking setting. Our model is competitive on the current leaderboard for the TextGraphs 2020 …
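The abstract gives only a high-level picture of the architecture. What follows is a minimal PyTorch sketch of the interleaving idea, not the authors' implementation: the module names, layer sizes, and the residual broadcast of LSTM states back to token states are all illustrative assumptions. The only grounded points from the abstract are that attention operates within each query-fact pair while a recurrent layer, run over the candidates in their prior ranking order, carries cross-document interactions.

# Minimal sketch of an LSTM-interleaved re-ranker (hypothetical; all
# names and dimensions are assumptions, not the LIT paper's code).
import torch
import torch.nn as nn

class LSTMInterleavedReranker(nn.Module):
    def __init__(self, hidden: int = 256, n_heads: int = 4, n_blocks: int = 2):
        super().__init__()
        # Each block interleaves one transformer layer (attention within
        # a query-document pair) with one BiLSTM layer (information flow
        # across candidate documents).
        self.blocks = nn.ModuleList()
        for _ in range(n_blocks):
            self.blocks.append(nn.ModuleDict({
                "transformer": nn.TransformerEncoderLayer(
                    d_model=hidden, nhead=n_heads, batch_first=True),
                "lstm": nn.LSTM(hidden, hidden // 2, batch_first=True,
                                bidirectional=True),
            }))
        self.score = nn.Linear(hidden, 1)

    def forward(self, pair_tokens: torch.Tensor) -> torch.Tensor:
        # pair_tokens: (num_candidates, seq_len, hidden) token embeddings
        # for each query-document pair, ordered by the prior ranking so
        # the LSTM can exploit ranking positions during re-ranking.
        x = pair_tokens
        for block in self.blocks:
            x = block["transformer"](x)            # within-pair attention
            cls = x[:, 0, :].unsqueeze(0)          # (1, num_candidates, hidden)
            cross, _ = block["lstm"](cls)          # cross-document interaction
            x = x + cross.squeeze(0).unsqueeze(1)  # broadcast back to tokens
        return self.score(x[:, 0, :]).squeeze(-1)  # one relevance score per doc

# Usage: score 20 candidate facts of 32 tokens each, then re-rank.
model = LSTMInterleavedReranker()
scores = model(torch.randn(20, 32, 256))
reranked = torch.argsort(scores, descending=True)

Feeding the candidates to the BiLSTM in their prior ranking order is what would let a model of this shape leverage ranking positions in the re-ranking setting, as the abstract claims for LIT.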

Cited by 3 publications (2 citation statements)
References 15 publications
“…Model compression is one approach to mitigating this issue. Various methods for compressing large-scale language models have been proposed in the last two years [8][9][10][11][12][13].…”
Section: Introduction (mentioning, confidence: 99%)
“…Model compression is one approach to mitigating this issue. Various methods for compressing large-scale language models have been proposed in the last two years [8][9][10][11][12][13]. From the perspective of downstream tasks, current model compression methods could be classified as task-agnostic and task-specific compression.…”
Section: Introduction (mentioning, confidence: 99%)