2015
DOI: 10.48550/arxiv.1505.05008
Preprint

Boosting Named Entity Recognition with Neural Character Embeddings

Cited by 7 publications (9 citation statements)
References 0 publications
“…Word embeddings are able to capture syntactic and semantic information, yet for tasks such as POS tagging and NER, intra-word morphological and shape information can also be very useful. Generally speaking, building natural language understanding systems at the character level has attracted considerable research attention [29,30,31,32]. Better results have been reported for morphologically rich languages in several NLP tasks.…”
Section: Character Embeddings
confidence: 99%
“…Better results have been reported for morphologically rich languages in several NLP tasks. Santos and Guimaraes [31] applied character-level representations along with word embeddings for NER, achieving state-of-the-art results on Portuguese and Spanish corpora. Kim et al. [29] showed positive results on building a neural language model using only character embeddings.…”
Section: Character Embeddings
confidence: 99%
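To make the character-level representation concrete, the following is a minimal PyTorch sketch of the idea behind dos Santos and Guimaraes' approach: a convolution over character embeddings with max-over-time pooling that produces a fixed-size word representation. All layer sizes, names (CharCNNWordEncoder, n_filters), and the random inputs are illustrative assumptions, not the paper's exact hyperparameters.

```python
# A minimal sketch of a character-level CNN word encoder; sizes and
# names are assumptions for illustration, not the paper's exact setup.
import torch
import torch.nn as nn

class CharCNNWordEncoder(nn.Module):
    def __init__(self, n_chars=100, char_dim=25, n_filters=50, kernel=3):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        # Convolve over each word's character sequence.
        self.conv = nn.Conv1d(char_dim, n_filters, kernel, padding=kernel // 2)

    def forward(self, char_ids):
        # char_ids: (batch, word_len) character indices, one word per row
        x = self.char_emb(char_ids).transpose(1, 2)  # (batch, char_dim, word_len)
        x = torch.relu(self.conv(x))                 # (batch, n_filters, word_len)
        # Max-over-time pooling yields a fixed-size representation per word.
        return x.max(dim=2).values                   # (batch, n_filters)

encoder = CharCNNWordEncoder()
words_as_chars = torch.randint(1, 100, (4, 12))      # 4 words, 12 chars each
char_features = encoder(words_as_chars)              # shape: (4, 50)
```

The resulting character features are then typically concatenated with pretrained word embeddings before tagging, which is what gives such models access to both morphological and distributional information.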
“…Dernoncourt et al. [21] followed the same model architecture to train the NER model. Similarly, Santos et al. [22] obtained word representations from characters via a CNN and concatenated them with word embeddings before feeding the result to a bidirectional LSTM. The Viterbi algorithm has also been used for inference in NER.…”
Section: Related Work
confidence: 99%
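The concatenate-then-BiLSTM pipeline this statement describes, plus Viterbi decoding over a tag-transition matrix, can be sketched as below. This is a hedged illustration under assumed dimensions and tag counts; the BiLSTMTagger and viterbi names and the stand-alone decoder are constructions for this sketch, not the cited papers' exact models.

```python
# A hedged sketch: concatenate char and word features, run a BiLSTM,
# then Viterbi-decode the best tag path. All dimensions are assumptions.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, word_dim=100, char_dim=50, hidden=128, n_tags=9):
        super().__init__()
        self.lstm = nn.LSTM(word_dim + char_dim, hidden,
                            bidirectional=True, batch_first=True)
        self.emit = nn.Linear(2 * hidden, n_tags)               # per-token tag scores
        self.trans = nn.Parameter(torch.zeros(n_tags, n_tags))  # tag-to-tag scores

    def forward(self, word_emb, char_emb):
        # word_emb: (batch, seq, word_dim); char_emb: (batch, seq, char_dim)
        x = torch.cat([word_emb, char_emb], dim=-1)
        h, _ = self.lstm(x)
        return self.emit(h)                                     # (batch, seq, n_tags)

def viterbi(emissions, trans):
    # emissions: (seq, n_tags); trans[i, j] scores moving from tag i to tag j
    score, back = emissions[0], []
    for t in range(1, emissions.size(0)):
        total = score.unsqueeze(1) + trans + emissions[t]       # (n_tags, n_tags)
        score, idx = total.max(dim=0)
        back.append(idx)
    best = [score.argmax().item()]
    for idx in reversed(back):
        best.append(idx[best[-1]].item())
    return best[::-1]                                           # best tag sequence

tagger = BiLSTMTagger()
words, chars = torch.randn(1, 6, 100), torch.randn(1, 6, 50)
scores = tagger(words, chars)
print(viterbi(scores[0], tagger.trans))                         # 6 tag indices
```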
“…Instead of the standard F1 score, we follow the evaluation proposed in [17], which consists of a modified First HAREM F1 score used to compare different models. Our choice is based on its wide adoption in the Portuguese NER literature [18], [19], [20].…”
Section: B. Evaluation
confidence: 99%
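The modified First HAREM metric from [17] involves corpus-specific matching rules that are not reproduced here; the sketch below only shows the standard entity-level precision, recall, and F1 that such NER evaluations build on, with entities represented as hypothetical (start, end, type) tuples.

```python
# Standard entity-level F1 over exact (start, end, type) matches; the
# HAREM-specific modifications of [17] are deliberately not modeled.
def entity_f1(gold, pred):
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)                        # exact boundary and type matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [(0, 2, "PER"), (5, 6, "LOC")]
pred = [(0, 2, "PER"), (5, 6, "ORG")]            # one correct, one wrong type
print(entity_f1(gold, pred))                     # 0.5
```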