2022
DOI: 10.1007/s10115-022-01698-1
Adapter-based fine-tuning of pre-trained multilingual language models for code-mixed and code-switched text classification

Abstract: Code-mixing and code-switching (CMCS) are frequent features in online conversations. Classification of such text is challenging if one of the languages is low-resourced. Fine-tuning pre-trained multilingual language models (PMLMs) is a promising avenue for code-mixed text classification. In this paper, we explore adapter-based fine-tuning of PMLMs for CMCS text classification. We introduce sequential and parallel stacking of adapters, continuous fine-tuning of adapters, and training adapters without freezing t…
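The adapter-based fine-tuning the abstract refers to inserts small bottleneck modules into an otherwise frozen pre-trained model, so only a small fraction of parameters is updated per task. Below is a minimal, self-contained PyTorch sketch of that general idea (a Houlsby-style bottleneck adapter); the module names, bottleneck size, and freezing helper are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Down-project, apply a non-linearity, up-project, and add a residual connection."""

    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Only these few parameters are trained; the PMLM weights stay frozen.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


def freeze_encoder_except_adapters(encoder: nn.Module, adapters: nn.ModuleList) -> None:
    """Freeze the pre-trained encoder and leave only the adapter parameters trainable."""
    for p in encoder.parameters():
        p.requires_grad = False
    for p in adapters.parameters():
        p.requires_grad = True


if __name__ == "__main__":
    adapter = BottleneckAdapter(hidden_size=768)
    x = torch.randn(2, 16, 768)   # (batch, sequence length, hidden size)
    print(adapter(x).shape)       # torch.Size([2, 16, 768])
```

In practice one such module is inserted after each transformer sub-layer, and a task-specific classification head is trained alongside the adapters.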

Cited by 13 publications (14 citation statements). References 26 publications (45 reference statements).
“…Adapters can be generally categorized into two categories: task adapters, which learn task-specific representations, and language adapters, which learn language-specific representations [27]. Typically, language adapters are used in conjunction with task adapters [3,28]. Extensive research has been conducted on adapters as a parameter-efficient fine-tuning method for various tasks.…”
Section: Adapter-based Fine-tuning of PLMs (mentioning)
confidence: 99%
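A minimal sketch of the language-adapter/task-adapter stacking this quote describes: the same hidden states pass first through a language adapter and then through a task adapter. The names, dimensions, and residual formulation are illustrative assumptions rather than the cited papers' exact configuration.

```python
import torch
import torch.nn as nn


def bottleneck(hidden_size: int, reduced_size: int = 64) -> nn.Sequential:
    """Small down/up-projection block used for both adapter types."""
    return nn.Sequential(
        nn.Linear(hidden_size, reduced_size),
        nn.ReLU(),
        nn.Linear(reduced_size, hidden_size),
    )


class StackedAdapters(nn.Module):
    """Language adapter followed by a task adapter on the same hidden states."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        # language adapter: learns language-specific representations
        self.language_adapter = bottleneck(hidden_size)
        # task adapter: learns task-specific representations
        self.task_adapter = bottleneck(hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        hidden_states = hidden_states + self.language_adapter(hidden_states)
        return hidden_states + self.task_adapter(hidden_states)


if __name__ == "__main__":
    x = torch.randn(2, 16, 768)         # (batch, sequence length, hidden size)
    print(StackedAdapters()(x).shape)   # torch.Size([2, 16, 768])
```

In the usual setup the language adapter is trained first with masked-language modelling on unlabelled target-language text and then frozen, while the task adapter is trained on the labelled downstream data.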
“…Despite exhibiting success over full fine-tuning for monolingual text, prompt-based learning of PLMs with CMCS data for downstream tasks has not been explored. In the context of CMCS data, we are only aware of full fine-tuning of PLMs [3,16]. Given that prompt-based learning relies on textual prompts, the design of such prompts for CMCS text is an open question.…”
Section: Introduction (mentioning)
confidence: 99%
“…Adapter-BERT outperforms fine-tuned BERT in terms of performance. Figure 2 illustrates the architecture of adapter-BERT [17,18].…”
Section: Proposed System (mentioning)
confidence: 99%
“…Out of all these models, the hybrid deep learning model CNN + BiLSTM works well for sentiment analysis, with an accuracy of 66%. In [18], an aspect-based sentiment analysis approach known as SentiPrompt uses sentiment-knowledge-enhanced prompts to tune the language model. This methodology is used for triplet extraction, pair extraction and aspect term extraction.…”
(mentioning)
confidence: 99%
“…Through the study of news text classification algorithms, it was found that traditional machine learning methods tend to lose useful semantic feature information during text representation. Using models such as Word2Vec and GloVe for text representation, and then sharing textual contextual semantic information by training neural network models, learns richer vector representations as features and is significantly better than traditional machine learning methods in terms of classification accuracy [7][8]. However, models such as Word2Vec cannot solve the problem of words with multiple meanings; especially given the sparse features and context-dependent nature of news headlines, many semantic problems remain to be solved [9][10].…”
Section: Key Issues in News Text Classification (mentioning)
confidence: 99%
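The static-embedding pipeline this quote describes typically averages pre-trained Word2Vec/GloVe vectors per document before feeding a classifier; because each word gets a single vector regardless of context, polysemy is not captured, which is exactly the limitation noted above. The sketch below assumes a locally available embedding file and naive whitespace tokenisation; both are illustrative placeholders.

```python
# Hedged sketch: average static word vectors as document features.
# "word2vec.bin" is a placeholder for any pre-trained word2vec-format file.
import numpy as np
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("word2vec.bin", binary=True)


def headline_to_features(headline: str) -> np.ndarray:
    """Average the vectors of in-vocabulary tokens (naive whitespace split)."""
    tokens = [t for t in headline.lower().split() if t in vectors]
    if not tokens:
        return np.zeros(vectors.vector_size, dtype=np.float32)
    # One fixed vector per word: a word gets the same vector in every context,
    # which is why such models cannot resolve multiple word senses.
    return np.mean([vectors[t] for t in tokens], axis=0)
```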