Abstract. Entity disambiguation with a knowledge base has become increasingly popular in the NLP community. In this paper, we employ Freebase as the knowledge base, which contains significantly more entities than Wikipedia and other knowledge bases. While huge in size, Freebase lacks context for most entities, such as the descriptive text and hyperlinks available in Wikipedia, which are useful for disambiguation. Instead, we leverage two features of Freebase, namely the naturally disambiguated mention phrases (aka aliases) and the rich taxonomy, to perform disambiguation in an iterative manner. Specifically, we explore both generative and discriminative models for each iteration. Experiments on 2,430,707 English sentences and 33,743 Freebase entities show the effectiveness of the two features, where 90% accuracy can be reached without any labeled data. We also show that discriminative models with the proposed split training strategy are robust against overfitting and consistently outperform the generative ones.
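To make the iterative idea concrete, below is a minimal Python sketch, not the paper's actual models: unambiguous aliases (those mapping to a single Freebase entity) provide "free" labels, a simple type-count model stands in for the generative/discriminative learners, and remaining ambiguous mentions are resolved using taxonomy overlap. All names and data (alias_to_entities, entity_types, the toy MIDs) are illustrative assumptions.

```python
from collections import Counter

# Toy stand-ins for Freebase data (assumed, not from the paper).
alias_to_entities = {
    "barack hussein obama": {"/m/02mjmr"},         # naturally disambiguated alias
    "obama": {"/m/02mjmr", "/m/0fukui"},           # ambiguous alias (person vs. city)
}
entity_types = {
    "/m/02mjmr": {"/people/person", "/government/us_president"},
    "/m/0fukui": {"/location/citytown"},
}

def disambiguate(mentions, n_iter=3):
    """mentions: list of (alias, context_type_hints) pairs; returns alias -> entity id."""
    # Seed labels from naturally disambiguated mentions (single-candidate aliases).
    labels = {alias: next(iter(alias_to_entities[alias]))
              for alias, _ in mentions if len(alias_to_entities[alias]) == 1}
    for _ in range(n_iter):
        # Trivial model: count taxonomy types among already-labeled entities.
        type_counts = Counter(t for e in labels.values() for t in entity_types[e])
        for alias, hints in mentions:
            if alias in labels:
                continue
            # Score candidates by type overlap with the context hints, then by
            # how common their types are among the labeled entities so far.
            def score(e):
                return (len(entity_types[e] & hints),
                        sum(type_counts[t] for t in entity_types[e]))
            labels[alias] = max(alias_to_entities[alias], key=score)
    return labels

print(disambiguate([("barack hussein obama", set()),
                    ("obama", {"/people/person"})]))
```

In this sketch the "model" is just a type-frequency counter; the paper instead trains generative or discriminative classifiers at each iteration, but the control flow (seed with unambiguous aliases, learn from current labels, relabel ambiguous mentions, repeat) is the same.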