2021 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata52589.2021.9672070
Multilingual Financial Word Embeddings for Arabic, English and French

Abstract: Natural Language Processing is increasingly being applied to analyse the text of many different types of financial documents. For many tasks, it has been shown that standard language models and tools need to be adapted to the financial domain in order to properly represent domain-specific vocabulary, styles and meanings. Previous work has almost exclusively focused on English financial text, so in this paper we describe the creation of novel financial word embeddings for three languages: English, French and Ar…

Cited by 4 publications (3 citation statements). References 15 publications.
“…In the landscape of generation tasks, recent studies have employed diverse models to address the challenges posed by different datasets, showcasing notable advancements (Table 2). AraBERT (Target/BERT), as examined in [31], demonstrated its effectiveness in the generation task when applied to the Saaq al-Bambuu dataset, achieving a precision (P) of 0.848, a recall (R) of 0.823, and an F1-score of 0.879. The study in [33] introduced AraT5, utilizing the XSum and OrangeSum datasets, which showed significant improvements, with R1, R2, and BLEU values of 7.5, 18.30, and an undetermined value, respectively. Furthermore, the application of mT5 to the APGC dataset, as discussed in [34], resulted in a precision of 71.6%, an F1-score of 0.820, and a BLEU score of 97.5.…”
Section: Related Work
confidence: 99%
“…extractive approaches (Gupta and Lehal, 2010), or by generating the summary from scratch (i.e., abstractive methods (Moratanch and Chitrakala, 2016; Zmandar et al., 2021)). Extractive methods have been a popular avenue for summarising text due to their relative simplicity and the comparatively high requirements of abstractive methods for computational resources and available data.…”
Section: Related Work
confidence: 99%
“…Extractive summarisation involves selecting the most relevant sentences from the source text, while abstractive summarisation generates a summary by rephrasing the text into new sentences. While extractive summarisation can be simpler and more efficient, abstractive summarisation has the potential to produce more informative and coherent summaries (Zmandar et al., 2021; El-Haj et al., 2010).…”
Section: Introduction
confidence: 99%