2012
DOI: 10.1007/978-3-642-31178-9_17

Two-Stage Named-Entity Recognition Using Averaged Perceptrons

Cited by 5 publications (8 citation statements)
References 3 publications
“…Experiment I: Sampling pseudo-ground truth. Our first experiment aims to answer RQ1: What is the utility of our sampling methods for generating pseudo-ground truth for a named entity recognizer? (Using the smallest KB (20%) results in about 15,000 tweets in the pseudo-ground truth.)…”
Section: Results
confidence: 99%
“…We cater for this bias by randomly sampling 10,000 tweets from both the test set and the pseudo-ground truth and repeating our experiments ten times. Ground truth is then assembled by linking the corpus of tweets using the KB. This ground truth consists of 82,305 tweets, with 12,488 unique concepts.…”
Section: Methods
confidence: 99%
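The repeated-sampling setup described in this excerpt is straightforward to reproduce. Below is a minimal Python sketch, assuming the tweet collections are plain lists and that evaluate() is a hypothetical scoring callback standing in for the citing paper's actual NER evaluation; the sample size (10,000) and the ten repetitions are taken from the quote.

    import random
    import statistics

    SAMPLE_SIZE = 10_000  # tweets drawn per run, from the quoted passage
    RUNS = 10             # number of repetitions, from the quoted passage

    def repeated_sampling_scores(test_set, pseudo_ground_truth, evaluate):
        # Draw equal-size random samples from both tweet collections RUNS
        # times and collect the score of the supplied evaluate() callback.
        scores = []
        for run in range(RUNS):
            rng = random.Random(run)  # fixed seed per run for reproducibility
            test_sample = rng.sample(test_set, SAMPLE_SIZE)
            pseudo_sample = rng.sample(pseudo_ground_truth, SAMPLE_SIZE)
            scores.append(evaluate(test_sample, pseudo_sample))
        return statistics.mean(scores), statistics.stdev(scores)

Averaging over the ten runs smooths out the variance introduced by the random sampling.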
“…For the extraction, we employed the named entity recognizer of [2]. We chose this tool as it is one of the strongest named entity recognizers in the Dutch language area with a reported F1-score of 83.56% (see [2] for a comparison with other systems).…”
Section: Named Entity Extraction
confidence: 99%
“…We chose this tool as it is one of the strongest named entity recognizers in the Dutch language area with a reported F1-score of 83.56% (see [2] for a comparison with other systems). We used a preliminary version of the annotations in the SoNaR corpus [11] as a training set for the NER tagger; this set is annotated according to a rich NER tagging scheme that distinguishes the categories person, location, organisation, product, event and miscellaneous.…”
Section: Named Entity Extraction
confidence: 99%
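The tagger these excerpts evaluate is the averaged-perceptron NER system of the indexed paper. Its two-stage architecture and feature templates are not detailed in the excerpts, so the sketch below only illustrates the generic averaged-perceptron update for a per-token classifier; the tag set is taken from the SoNaR categories quoted above, and the class and feature names are illustrative, not the authors' implementation.

    from collections import defaultdict

    # Tag set from the quoted SoNaR scheme, plus O for non-entity tokens.
    TAGS = ["O", "person", "location", "organisation",
            "product", "event", "miscellaneous"]

    class AveragedPerceptron:
        # Multiclass perceptron whose final weights are the average of the
        # weight vector over all update steps, computed lazily.

        def __init__(self):
            self.weights = defaultdict(lambda: defaultdict(float))  # feat -> tag -> weight
            self._totals = defaultdict(lambda: defaultdict(float))  # running weight sums
            self._stamps = defaultdict(lambda: defaultdict(int))    # step of last change
            self._step = 0

        def predict(self, features):
            # Score every tag by summing the weights of the active features.
            scores = {tag: 0.0 for tag in TAGS}
            for feat in features:
                for tag, w in self.weights.get(feat, {}).items():
                    scores[tag] += w
            return max(scores, key=scores.get)

        def update(self, truth, guess, features):
            self._step += 1
            if truth == guess:
                return
            for feat in features:
                for tag, delta in ((truth, 1.0), (guess, -1.0)):
                    # Bring the running total up to date before the change.
                    elapsed = self._step - self._stamps[feat][tag]
                    self._totals[feat][tag] += elapsed * self.weights[feat][tag]
                    self._stamps[feat][tag] = self._step
                    self.weights[feat][tag] += delta

        def average(self):
            # Replace each weight by its mean over all steps seen so far.
            for feat, tags in self.weights.items():
                for tag, w in tags.items():
                    elapsed = self._step - self._stamps[feat][tag]
                    total = self._totals[feat][tag] + elapsed * w
                    self.weights[feat][tag] = total / max(self._step, 1)

Averaging damps the influence of late, noisy updates, which is why the averaged variant of the perceptron tends to generalize better than the vanilla one on tagging tasks.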