2021
DOI: 10.1007/978-3-030-83527-9_27
Using BERT Encoding and Sentence-Level Language Model for Sentence Ordering

Cited by 6 publications (2 citation statements)
References 28 publications
“…(4) Most of the existing pretext tasks transfer well across modalities. For instance, Masked Language Modelling (MLM) in the text domain has been applied to audio and images, e.g., Masked Acoustic Modelling [180], [200] and Masked Image Region Prediction [190], while Sentence Ordering Modelling (SOM) [201] in the text domain and Frame Ordering Modelling (FOM) [116] in the video domain share the same idea. We further discuss the pretext tasks for multimodal Transformer pretraining in what follows.…”
Section: Task-agnostic Multimodal Pretraining
confidence: 99%
“…In this study, the encoding part has been used for text feature extraction in order to turn the sentence into its respective vector [57].…”
Section: The Second Strategy: DistilBERT Language Model (Transformers...
confidence: 99%