Proceedings of the 22nd International Conference on Computational Linguistics (COLING '08), 2008
DOI: 10.3115/1599081.1599140
Stopping criteria for active learning of named entity recognition

Abstract: Active learning is a proven method for reducing the cost of creating the training sets that are necessary for statistical NLP. However, there has been little work on stopping criteria for active learning. An operational stopping criterion is necessary to be able to use active learning in NLP applications. We investigate three different stopping criteria for active learning of named entity recognition (NER) and show that one of them, gradient-based stopping, (i) reliably stops active learning, (ii) achieves nea…

Cited by 56 publications (57 citation statements).
References 11 publications (9 reference statements).
“…Works exist which attempt to determine operating regions of the active learning cycle to switch between more exploration-driven cycles and more exploitation-driven cycles [9,31]. A closely related concept is attempting to indicate when active learning has achieved its maximum performance without cross-validation data, referred to as deriving a stopping criterion [14,44,61,71,76,79,80].…”
Section: Related Work
confidence: 99%
“…To address this problem, some active-learning models have been proposed [14,15]. These models showed that manual labeling cost can be reduced with little or no degradation of performance.…”
Section: Previous Work
confidence: 99%
“…Similarly, Laws and Schütze [20] proposed to stop the active learning process when the performance (i.e., confidence) of the learner converges and the gradient of the performance curve approaches 0. Differing from the strategy proposed by Vlachos [35], the gradient-based stopping criterion does not require the performance curve to have peaked before stopping.…”
Section: Chapter 2 Related Work
confidence: 99%
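The gradient-based criterion quoted above can be sketched in a few lines: track the learner's mean confidence after each labeling round and stop once the slope of that curve flattens. This is a minimal illustration, not the paper's implementation; the function name, window size, and threshold are all assumptions chosen for the example.

```python
# Hypothetical sketch of a gradient-based stopping criterion for active
# learning. Assumes confidence_history holds the learner's mean confidence
# after each labeling round; window and epsilon are illustrative defaults.

def should_stop(confidence_history, window=5, epsilon=1e-3):
    """Stop when the gradient of the confidence curve over the last
    `window` rounds is close to zero (the curve has flattened)."""
    if len(confidence_history) < window:
        return False  # not enough rounds to estimate a gradient
    recent = confidence_history[-window:]
    # average per-round change (finite-difference estimate of the gradient)
    gradient = (recent[-1] - recent[0]) / (window - 1)
    return abs(gradient) < epsilon
```

Note that, as the citation statement points out, this check fires when the curve has merely flattened; unlike a peak-based criterion, it does not require performance to have peaked and then declined.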