The learning of word embeddings has gained momentum in many Natural Language Processing (NLP) applications, ranging from text document summarisation (Mohd et al, 2020), fake news detection (Faustini and Covões, 2017; Silva et al, 2020), and term similarity measures (Lastra et al, 2019; Gali et al, 2019) to sentiment classification (Rezaeinia et al, 2019; Giatsoglou et al, 2017; Park et al, 2021), edutainment (Blanco et al, 2020), Named Entity Recognition (Turian et al, 2010; Gutiérrez-Batista et al, 2018), classification tasks (Jung et al, 2022) and personalisation systems (Valcarce et al, 2019), to name a few. The most popular methods take a large corpus of texts and represent each word with a real-valued dense vector that captures its meaning, under the assumption that words sharing common contexts in the input corpus are semantically related, and that consequently their word vectors lie close together in the vector space (Mikolov et al, 2013b).
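To make the last point concrete, the following is a minimal sketch (not taken from the surveyed works) that trains a skip-gram word2vec model, in the spirit of Mikolov et al (2013b), using the gensim library. The toy corpus, hyperparameters, and expected similarity scores are assumptions chosen purely for illustration: words used in similar contexts ("cat" and "dog") should end up with nearby vectors, while a word with different contexts ("car") should not.

```python
# Illustrative sketch only: train skip-gram word2vec on a toy corpus and
# check that words sharing contexts receive nearby vectors. Corpus and
# hyperparameters are assumptions, not values from the surveyed papers.
from gensim.models import Word2Vec

# Tiny toy corpus: "cat" and "dog" share contexts; "car" does not.
corpus = [
    ["the", "cat", "chased", "the", "mouse"],
    ["the", "dog", "chased", "the", "mouse"],
    ["the", "cat", "ate", "the", "food"],
    ["the", "dog", "ate", "the", "food"],
    ["the", "car", "drove", "down", "the", "road"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of the dense word vectors
    window=2,         # context window size
    min_count=1,      # keep every word (the corpus is tiny)
    sg=1,             # 1 = skip-gram, 0 = CBOW
    epochs=200,       # many passes so the toy corpus converges
    seed=42,
)

# Cosine similarity in the learned vector space: words with shared
# contexts should score higher than words with disjoint contexts.
print(model.wv.similarity("cat", "dog"))  # expected: relatively high
print(model.wv.similarity("cat", "car"))  # expected: relatively low
print(model.wv["cat"].shape)              # (50,): a dense real-valued vector
```

On a corpus this small the absolute similarity values are noisy, but the ordering (cat/dog above cat/car) illustrates the distributional assumption the paragraph describes; real applications train on corpora of millions of sentences.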