2021
DOI: 10.1007/978-981-16-0401-0_14
Analysis of Contextual and Non-contextual Word Embedding Models for Hindi NER with Web Application for Data Collection

Cited by 13 publications (10 citation statements)
References 11 publications
“…Similarly, Lorini et al. used context-independent multilingual embeddings in [16] for their flood recognition system based on online social media, the European Flood Awareness System (EFAS). However, context-independent word embeddings typically fail to capture relevant information [2]. Torres et al. explored crisis-related conversations in a cross-lingual setting [26].…”
Section: Related Work (mentioning)
confidence: 99%
“…Contextual word embeddings, on the other hand, take the context of each word into account when encoding the words of a sentence. BERT, RoBERTa and XLM are popular contextual embedding methods based on the transformer architecture [4,9], a recent breakthrough in the field of Natural Language Processing (NLP). The transformer was originally introduced as a means of improving neural machine translation [7,29].…”
Section: Contextual Word Embeddings (mentioning)
confidence: 99%
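To make that contrast concrete, here is a minimal sketch, assuming the Hugging Face `transformers` library and the `bert-base-uncased` checkpoint (both choices are illustrative; the excerpt does not name an implementation). It shows the defining property of a contextual model: the same surface word receives a different vector in each sentence it appears in.

```python
# Minimal sketch: contextual embeddings assign context-dependent vectors.
# Assumes the Hugging Face `transformers` library; model choice is illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = [
    "She deposited the money at the bank.",      # financial sense
    "They walked along the bank of the river.",  # riverside sense
]

vectors = []
with torch.no_grad():
    for text in sentences:
        inputs = tokenizer(text, return_tensors="pt")
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
        # Locate "bank" among the wordpiece tokens; +1 skips the [CLS] token.
        idx = tokenizer.tokenize(text).index("bank") + 1
        vectors.append(hidden[0, idx])

# A non-contextual (static) embedding would give identical vectors for
# "bank" in both sentences; a contextual model gives distinct ones.
sim = torch.nn.functional.cosine_similarity(vectors[0], vectors[1], dim=0)
print(f"cosine similarity between the two 'bank' vectors: {sim:.3f}")
```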
“…DistilRoBERTa and XLM are transformer-based models that support both the fine-tuning and feature-based approaches [4]. As discussed earlier, the fine-tuning approach involves re-using the entire architecture for downstream tasks.…”
Section: Contextual Word Embeddings (mentioning)
confidence: 99%
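A brief sketch of that distinction, again assuming the Hugging Face `transformers` library (the DistilRoBERTa checkpoint and the 5-label head are illustrative placeholders): in the feature-based approach the pretrained encoder is frozen and only supplies vectors to a separate downstream model, whereas fine-tuning re-uses the entire architecture with a task head and updates all weights.

```python
# Sketch of feature-based vs. fine-tuning use of a pretrained transformer.
# Assumes Hugging Face `transformers`; the label count is a placeholder.
from transformers import AutoModel, AutoModelForTokenClassification

# Feature-based: freeze the encoder and use its hidden states as fixed
# input features for a separately trained downstream classifier.
encoder = AutoModel.from_pretrained("distilroberta-base")
for param in encoder.parameters():
    param.requires_grad = False  # encoder weights are never updated

# Fine-tuning: re-use the entire architecture, attach a token-level task
# head (e.g. for NER-style tagging), and train all weights end to end.
tagger = AutoModelForTokenClassification.from_pretrained(
    "distilroberta-base", num_labels=5
)
trainable = sum(p.numel() for p in tagger.parameters() if p.requires_grad)
print(f"fine-tuning updates {trainable:,} parameters")
```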
“…In this paper, we apply machine learning approaches built on a novel way of generating word embeddings from various pretrained multilingual BERT models (Barua et al., 2020) to improve dependency relation prediction. These models are fine-tuned to study the variational changes within the models and to improve the performance of dependency parsing for Tamil.…”
Section: Introduction (mentioning)
confidence: 99%
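As an illustration of the first step of that pipeline, the sketch below (assuming Hugging Face `transformers` and the `bert-base-multilingual-cased` checkpoint; the example sentence is illustrative, not from the cited work) extracts contextual embeddings for a Tamil sentence, the kind of representation a downstream dependency parser would consume.

```python
# Sketch: contextual word embeddings for Tamil from multilingual BERT.
# Assumes Hugging Face `transformers`; the sentence is an illustrative example.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

text = "தமிழ் ஒரு செம்மொழி"  # "Tamil is a classical language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state  # (1, seq_len, 768)

print(tokenizer.tokenize(text))  # wordpiece segmentation of the Tamil text
print(embeddings.shape)          # one 768-d contextual vector per piece
```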