Proceedings of the 1st International Conference on Islam, Science and Technology, ICONISTECH 2019, 11-12 July 2019, Bandung, Indonesia. Published 2020.
DOI: 10.4108/eai.11-7-2019.2297618

File Training Generator For Indonesian Language In Named Entity Recognition Using Anago Library

Abstract: Named Entity Recognition (NER), also called Named Entity Recognition and Classification (NERC), is one of the main components of information extraction; it aims to detect and categorize named entities in text. NER is generally used to detect people's names, place names, and organizations in a document, but it can also be extended to identify genes, proteins, and other entity types as needed. NER is useful in many NLP (Natural Language Processing) applications, such as question answering, summarization, and dialog systems, because i…

Cited by 2 publications (2 citation statements) | References 1 publication

“…Developed and optimized by Nakayama in 2017 around the combined BiLSTM-CRF technique [26], anaGo is implemented in Keras for NER and many other sequence labeling tasks. anaGo accepts various pre-trained word embeddings as input; it can also generate word embeddings itself from the training data [12,27]. The BiLSTM-CRF architecture is described in Fig.…”
Section: Models (mentioning citation, confidence: 99%)
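
As a concrete illustration of the workflow this citation describes, here is a minimal sketch of anaGo's high-level training API. The Indonesian token/tag sequences are invented placeholder data, not the paper's actual training files, and exact method signatures may vary across anaGo versions.

```python
# Minimal sketch of anaGo's high-level API (pip install anago).
# x_train/y_train are invented placeholder data: each sample pairs a list of
# tokens with a list of BIO entity tags.
import anago

x_train = [["Joko", "Widodo", "berkunjung", "ke", "Bandung", "."]]
y_train = [["B-PER", "I-PER", "O", "O", "B-LOC", "O"]]

model = anago.Sequence()                # BiLSTM-CRF sequence labeling model
model.fit(x_train, y_train, epochs=15)  # learns embeddings from the training
                                        # data if no pre-trained vectors given

# Tag a new tokenized sentence; returns the recognized entities.
words = "Joko Widodo berkunjung ke Bandung .".split()
print(model.analyze(words))
```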
“…Bi-LSTM uses two LSTM networks, forward (f1-4) and backward (b1-4). The vector representations from both networks are concatenated (c1-4) and fed to the CRF tagging layer for label assignment [12,13,27]. The model consists of 10 layers with over 2M parameters.…”
Section: Models (mentioning citation, confidence: 99%)
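
To make the architecture in this excerpt concrete, the following is a hedged Keras sketch of the per-token BiLSTM encoder: forward and backward LSTM outputs are concatenated for each token, as the c1-4 nodes above describe. The vocabulary size, dimensions, and tag count are assumptions, and the softmax output stands in for the CRF layer anaGo actually uses (e.g. keras-contrib's CRF), which decodes the whole label sequence jointly instead of scoring tokens independently.

```python
# Sketch of a BiLSTM tagger in Keras; hypothetical dimensions throughout.
from tensorflow.keras.layers import (Input, Embedding, Bidirectional, LSTM,
                                     TimeDistributed, Dense)
from tensorflow.keras.models import Model

VOCAB_SIZE = 20000  # assumed vocabulary size
EMBED_DIM = 100     # assumed word-embedding dimension
NUM_TAGS = 9        # e.g. BIO tags for PER/LOC/ORG plus O

words = Input(shape=(None,), dtype="int32")           # variable-length token ids
x = Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(words)
# Forward and backward LSTMs; merge_mode="concat" joins their per-token outputs.
x = Bidirectional(LSTM(100, return_sequences=True), merge_mode="concat")(x)
# Per-token tag scores; a CRF layer would replace this softmax in BiLSTM-CRF.
tags = TimeDistributed(Dense(NUM_TAGS, activation="softmax"))(x)

model = Model(words, tags)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

The benefit of swapping the softmax for a CRF is that the tagger also learns label-transition constraints (for example, that I-PER cannot follow B-LOC), which is the point of combining BiLSTM with CRF.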