Proceedings of the Thirteenth Conference on Computational Natural Language Learning - CoNLL '09 2009
DOI: 10.3115/1596374.1596384
A method for stopping active learning based on stabilizing predictions and the need for user-adjustable stopping

Abstract: A survey of existing methods for stopping active learning (AL) reveals the needs for methods that are: more widely applicable; more aggressive in saving annotations; and more stable across changing datasets. A new method for stopping AL based on stabilizing predictions is presented that addresses these needs. Furthermore, stopping methods are required to handle a broad range of different annotation/performance tradeoff valuations. Despite this, the existing body of work is dominated by conservative methods wit…

Cited by 59 publications (115 citation statements) | References 13 publications
“…Overfitting may be one possible reason to explain this observation. There have been some efforts to address this problem [325,326,327]. These methods are fairly similar, based on the notion of an intrinsic measure of stability or self-confidence of the learner and that active learning ceases to be useful once this measure begins to degrade.…”
Section: Other Possible Directions Of Future Work
confidence: 99%
“…S_t and S_{t+1}), we can stop the learning. We measure the difference between predicted sets using the Kappa statistic (Bloodgood and Vijay-Shanker 2009). In our experiments, the standard κ threshold of 0.99 makes the learning stop so early that the system cannot achieve high F1.…”
Section: Stopping Criteria
confidence: 99%
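The stopping rule described in the quote above can be sketched as follows. This is a minimal, hypothetical illustration rather than the authors' implementation: it compares the label predictions of two consecutive actively trained models on a fixed "stop set" of unlabeled examples using Cohen's kappa, and signals a stop once agreement reaches a threshold (0.99 in the quote). The function names and the single-pair comparison are assumptions; the published method aggregates agreement over several consecutive models before stopping.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two label sequences."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of positions where the labels match.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independence, from the marginal label counts.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    if p_e == 1.0:
        return 1.0
    return (p_o - p_e) / (1 - p_e)

def should_stop(prev_preds, curr_preds, threshold=0.99):
    """Stop AL once consecutive models' predictions on the stop set stabilize."""
    return cohens_kappa(prev_preds, curr_preds) >= threshold
```

The quoted observation is that 0.99 can fire too early for some tasks, which is why the paper argues for a user-adjustable threshold: lowering the agreement bar trades extra annotation cost for a better chance of reaching peak F1.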
“…In addition to the confidence-based stopping criteria, Bloodgood and Vijay-Shanker [4] proposed a stability-based stopping criterion. Bloodgood and Vijay-Shanker consider stability a necessary factor in determining when the active learner should terminate.…”

Section: Chapter 2 Related Work
confidence: 99%