Neural POS tagging of Shahmukhi by using contextualized word representations (2023)
DOI: 10.1016/j.jksuci.2022.12.004

Cited by 7 publications (4 citation statements)
References 30 publications
“…Besides the contextualized word embeddings, we also experimented with incorporating part-of-speech (POS) tags and Word2Vec embeddings. POS tagging has not shown any improvement for NER (Tehseen et al., 2022, 2023), and the F1-score remained around ~0.78. We used the Stanford POS tagger (Toutanova et al., 2003) to tag the Wojood NER dataset and concatenated the POS encoding vectors with the word encoding vectors at the input layers of the models.…”
Section: Models (mentioning; confidence: 88%)
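The concatenation step this excerpt describes can be sketched as follows. This is a minimal illustration, not the cited authors' code: the tag inventory, embedding dimension, and function names are all hypothetical stand-ins.

```python
import numpy as np

# Hypothetical POS tag inventory and embedding size (illustrative only).
POS_TAGS = ["NOUN", "VERB", "ADJ", "DET"]
EMB_DIM = 8

def one_hot_pos(tag):
    """Encode a POS tag as a one-hot vector over the tag inventory."""
    vec = np.zeros(len(POS_TAGS))
    vec[POS_TAGS.index(tag)] = 1.0
    return vec

def input_vector(word_embedding, pos_tag):
    """Concatenate a word's embedding with its POS one-hot encoding,
    as done at the model's input layer in the excerpt above."""
    return np.concatenate([word_embedding, one_hot_pos(pos_tag)])

# Toy example: a random 8-dim "word embedding" for one token.
rng = np.random.default_rng(0)
emb = rng.normal(size=EMB_DIM)
x = input_vector(emb, "NOUN")
print(x.shape)  # (12,): 8-dim embedding + 4-dim POS one-hot
```

Each token's input vector simply grows by the size of the tag inventory; the rest of the network is unchanged.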
“…Different machine and deep learning techniques have been used to perform NER, such as Conditional Random Fields (CRF) (Patil et al., 2020; Bhumireddypalli et al., 2023), Support Vector Machines (SVM) (Mady et al., 2022), template-based methods (Cui et al., 2021), Recurrent Neural Networks (RNN) (Ahmad et al., 2020), Bidirectional LSTM (Tehseen et al., 2023), Transformer-based models (e.g. BERT) (Agrawal et al., 2022), and others.…”
Section: Introduction (mentioning; confidence: 99%)
“…They used the available Malayalam corpus of 280,000 tokens and trained different RNN models, including GRU, LSTM and Bi-LSTM, which showed better results than the available POS taggers, reaching 98% in terms of F-measure. Also, a bidirectional LSTM POS tagger for Shahmukhi is proposed in the study [20]. They first collected millions of tokens and manually annotated 130,000 words to be tested in the study.…”
Section: Literature Review In Pos Tagging Low Resourced Language Usin... (mentioning; confidence: 99%)
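The bidirectional LSTM tagger architecture mentioned in this excerpt can be sketched with PyTorch. This is a generic illustration under assumed sizes, not the cited study's model: vocabulary size, tagset size, and hidden dimensions are placeholders.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Minimal bidirectional LSTM POS tagger sketch (sizes illustrative)."""
    def __init__(self, vocab_size, tagset_size, emb_dim=64, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # bidirectional=True runs the LSTM left-to-right and right-to-left,
        # so each token's representation sees context on both sides.
        self.lstm = nn.LSTM(emb_dim, hidden_dim,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, tagset_size)

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))
        return self.out(h)  # per-token tag scores

model = BiLSTMTagger(vocab_size=1000, tagset_size=10)
scores = model(torch.randint(0, 1000, (1, 5)))  # one 5-token sentence
print(scores.shape)  # torch.Size([1, 5, 10]): a score per tag per token
```

Training would pair these per-token scores with gold tags under a cross-entropy loss; the excerpt's 98% F-measure figure refers to the Malayalam study, not to this sketch.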
“…POS tagging, also known as grammatical tagging, is the process of labeling a word in a sentence as relating to a part of speech, based on both its definition and its context (1). POS tagging is employed as a preprocessing step by other natural language processing (NLP) applications, such as machine translation and relation extraction, to enhance their performance; it is a crucial study area for achieving high-quality research in other NLP areas (2). An example of POS tagging is given below: <PPR> <JINT> <AMN> <VM> <VA> <PUN> / Tai bohut polom koi bhabi pale / She understood very late.…”
Section: Introduction (mentioning; confidence: 99%)
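The labeling step this excerpt defines can be illustrated with a toy lookup-based tagger. Real taggers also use context, as the excerpt notes; the lexicon and tagset below are hypothetical, using the excerpt's English gloss rather than its Assamese tag alignment.

```python
# Toy dictionary-lookup tagger: each token receives a part-of-speech label.
# The lexicon and tag names are illustrative placeholders, not a real tagset.
LEXICON = {"she": "PRON", "understood": "VERB", "very": "ADV", "late": "ADV"}

def tag_sentence(tokens):
    """Label each token with its POS tag; unknown words get 'UNK'."""
    return [(tok, LEXICON.get(tok.lower(), "UNK")) for tok in tokens]

print(tag_sentence(["She", "understood", "very", "late"]))
# [('She', 'PRON'), ('understood', 'VERB'), ('very', 'ADV'), ('late', 'ADV')]
```

A lookup tagger cannot disambiguate words with multiple possible tags; that is exactly where context-sensitive models such as the contextualized-embedding taggers discussed in this report improve on it.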