2020
DOI: 10.48550/arxiv.2002.02925
Preprint

BERT-of-Theseus: Compressing BERT by Progressive Module Replacing

Abstract: In this paper, we propose a novel model compression approach to effectively compress BERT by progressive module replacing. Our approach first divides the original BERT into several modules and builds their compact substitutes. Then, we randomly replace the original modules with their substitutes to train the compact modules to mimic the behavior of the original modules. We progressively increase the probability of replacement through the training. In this way, our approach brings a deeper level of interaction …
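
As a minimal sketch of the mechanism the abstract describes, assuming PyTorch, the snippet below mixes original (predecessor) and compact (successor) modules at random during training and raises the replacement probability on a linear schedule. The names (TheseusEncoder, replacement_schedule, replace_prob) and the module split are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class TheseusEncoder(nn.Module):
    """Hybrid encoder that randomly swaps original modules for compact substitutes."""

    def __init__(self, predecessors, successors):
        super().__init__()
        assert len(predecessors) == len(successors)
        self.predecessors = nn.ModuleList(predecessors)  # original BERT modules (kept frozen)
        self.successors = nn.ModuleList(successors)      # compact substitutes being trained
        self.replace_prob = 0.0                          # raised progressively during training

    def forward(self, hidden_states):
        for pred, succ in zip(self.predecessors, self.successors):
            # During training, each module is independently replaced with
            # probability replace_prob; at inference, only the compact
            # successors (the final compressed model) are used.
            if (not self.training) or torch.rand(1).item() < self.replace_prob:
                hidden_states = succ(hidden_states)
            else:
                hidden_states = pred(hidden_states)
        return hidden_states

def replacement_schedule(step, total_steps, p_start=0.1, p_end=1.0):
    """Linearly increase the replacement probability over training."""
    frac = min(step / max(total_steps, 1), 1.0)
    return p_start + (p_end - p_start) * frac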

Cited by 21 publications (25 citation statements)
References 24 publications

“…CompressingBERT [50]; Quantization: Q-BERT [150], Q8BERT [204]; Parameter Sharing: ALBERT [91]; Distillation: DistilBERT [146], TinyBERT [74], MiniLM [188]; Module Replacing: BERT-of-Theseus [196] (Figure 3: Taxonomy of PTMs with Representative Examples). Sentence Order Prediction (SOP): To better model inter-sentence coherence, ALBERT [91] replaces the NSP loss with a sentence order prediction (SOP) loss. As conjectured in Lan et al. [91], NSP conflates topic prediction and coherence prediction in a single task.…”
Section: Model Pruning (mentioning)
confidence: 99%
“…Module replacing is an interesting and simple way to reduce the model size, which replaces the large modules of original PTMs with more compact substitutes. Xu et al [196] proposed Theseus Compression motivated by a famous thought experiment called "Ship of Theseus", which progressively substitutes modules from the source model with modules of fewer parameters. Different from KD, Theseus Compression only requires one task-specific loss function.…”
Section: Module Replacing (mentioning)
confidence: 99%
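
To illustrate the last point, that Theseus Compression trains with only one task-specific loss rather than distillation objectives, here is a minimal training-step sketch. The encoder is assumed to expose the hypothetical replace_prob attribute from the sketch after the abstract; the classifier head, optimizer, and batch layout are likewise illustrative, and predecessor parameters are assumed frozen so only the successors receive gradients.

import torch.nn.functional as F

def train_step(encoder, classifier, optimizer, batch, step, total_steps):
    # Progressively raise the replacement probability (linear schedule).
    encoder.replace_prob = min(1.0, 0.1 + 0.9 * step / max(total_steps, 1))

    # Forward pass through the hybrid predecessor/successor encoder.
    logits = classifier(encoder(batch["hidden_states"]))

    # The only objective is the ordinary downstream task loss; there are no
    # extra terms matching logits, attention maps, or hidden states.
    loss = F.cross_entropy(logits, batch["labels"])

    optimizer.zero_grad()
    loss.backward()   # with predecessors frozen, gradients update only the successors
    optimizer.step()
    return loss.item()
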
“…In addition, remarkable advances have been made in knowledge distillation for language model compression (i.e., BERT [13]), and these works show that mimicking the distribution of self-attention and intermediate representations of transformer blocks increases performance [52,27,58,69] on downstream tasks. In particular, in transformer-based language model distillation, DistilBERT [52] proposes to train the small BERT by mimicking the Teacher's output probability of masked language prediction and the embedding features.…”
Section: Related Work (mentioning)
confidence: 99%
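
For contrast with module replacing, the sketch below spells out the two distillation objectives this citation mentions: matching the teacher's softened output distribution and matching intermediate transformer-block representations. The temperature, weighting, and tensor names are illustrative assumptions, not taken from DistilBERT or TinyBERT code.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden,
                      temperature=2.0, alpha=0.5):
    # Match the teacher's softened output distribution (KL divergence).
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd_term = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

    # Match selected intermediate representations of the transformer blocks (MSE).
    hidden_term = F.mse_loss(student_hidden, teacher_hidden)

    return alpha * kd_term + (1 - alpha) * hidden_term
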
“…MobileBERT takes approximately 0.6 seconds to classify a text sequence on a Google Pixel 3 smartphone. And, on the GLUE benchmark, which consists of 9 natural language understanding (NLU) datasets [16], MobileBERT achieves higher accuracy than other efficient networks such as DistilBERT [17], PKD [18], and several others [19,20,21,22]. To achieve this, MobileBERT introduced two concepts into their NLP self-attention network that are already in widespread use in CV neural networks:…”
Section: What Has CV Research Already Taught NLP Research About Effic... (mentioning)
confidence: 99%