Proceedings of the Conference Recent Advances in Natural Language Processing - Deep Learning for Natural Language Processing, 2021
DOI: 10.26615/978-954-452-072-4_044

Decoupled Transformer for Scalable Inference in Open-domain Question Answering

Abstract: Large transformer models, such as BERT, achieve state-of-the-art results in machine reading comprehension (MRC) for open-domain question answering (QA). However, transformers have a high computational cost for inference, which makes them hard to apply to online QA systems for applications like voice assistants. To reduce computational cost and latency, we propose decoupling the transformer MRC model into an input-component and a cross-component. The decoupling allows for part of the representation computation to be p…
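The decoupling described in the abstract can be sketched minimally: run shared lower layers on the question and each passage independently (so passage representations can be pre-computed offline), then run only the upper "cross" layers on the question-passage pair at query time. Everything below is illustrative, not the paper's actual model: the toy `layer` (a linear map plus ReLU) stands in for a full self-attention block, and all shapes and weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden size

def layer(x, w):
    # Stand-in for one transformer layer: linear map + ReLU.
    return np.maximum(x @ w, 0.0)

# Two lower (shared, input-component) layers and two upper (cross) layers.
lower_w = [rng.standard_normal((d, d)) * 0.1 for _ in range(2)]
upper_w = [rng.standard_normal((d, d)) * 0.1 for _ in range(2)]

def lower(x):
    for w in lower_w:
        x = layer(x, w)
    return x

def upper(x):
    for w in upper_w:
        x = layer(x, w)
    return x

# Offline: pre-compute lower-layer representations for every indexed passage.
passages = [rng.standard_normal((5, d)) for _ in range(3)]
cache = [lower(p) for p in passages]

# Online: encode the question once through the lower layers, then run only
# the upper (cross) layers over the concatenated question + cached passage.
question = rng.standard_normal((4, d))
q_repr = lower(question)
scores = [float(upper(np.concatenate([q_repr, c], axis=0)).mean())
          for c in cache]
best = int(np.argmax(scores))
```

At query time only the upper layers see the question-passage pair, so the per-query cost scales with the number of cross layers rather than the full encoder depth; the lower-layer passage representations are computed once at indexing time.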

Cited by 1 publication (1 citation statement)
References 3 publications
“…Similar to our approach of a re-ranker on top of a shared retriever, PreTTR (MacAvaney et al., 2020) pre-computed term representations for all documents, and used these to run only the upper layers of a transformer re-ranker model. Decoupled Transformer (Elfdaeel and Peshterliev, 2021) also shares the lower layers of a transformer encoder to serve as a re-ranker, using the upper layers as a reader, but lacks a retriever and decoder, and focuses on computationally efficient re-ranking.…”
Section: Related Work
confidence: 99%