2023
DOI: 10.3390/app13074225

Fine-Tuning BERT-Based Pre-Trained Models for Arabic Dependency Parsing

Abstract: With the advent of pre-trained language models, many natural language processing tasks in various languages have achieved great success. Although some research has been conducted on fine-tuning BERT-based models for syntactic parsing, and several Arabic pre-trained models have been developed, no attention has been paid to Arabic dependency parsing. In this study, we attempt to fill this gap and compare nine Arabic models, fine-tuning strategies, and encoding methods for dependency parsing. We evaluated three t…

Cited by 10 publications (2 citation statements)
References 24 publications
“…There have been previous developments in Arabic dependency parsing (Marton et al., 2013; Zhang et al., 2015; Shahrour et al., 2016; Al-Ghamdi et al., 2023). However, they are not based on state-of-the-art (SOTA) developments in neural dependency parsing and pre- 'w+' 'hl' 's+' 'yšrHwn' '+hA' '?'…”
Section: Introduction
confidence: 99%
“…In terms of NLP applications, Alrumayyan and Al-Yahya [7] utilized language modeling and neural embeddings to support the task of jurisprudence principles. Al-Ghamdi et al. [8] employed the fine-tuning of Arabic bidirectional encoder representations from transformer-based models to develop Arabic pre-trained models. Alanazi [9] used cryptocurrency-related Twitter text to classify pure and compound sentiments.…”
mentioning
confidence: 99%