Currently, named entity recognition (NER) is mainly evaluated on standard, well-annotated data sets. However, constructing a well-annotated data set consumes considerable manpower and time. In many applications of NER, data sets may contain substantial noise, a large part of which comes from unlabeled entities. At present, the training process of most models treats unlabeled entities as nonentities, which biases these models toward predicting most words of an input context as nonentities and greatly degrades their performance. In this paper, as a first attempt, we propose a novel adaptive positive-unlabeled (adaPU) learning technique and integrate it into a machine reading comprehension (MRC) framework for NER, so that the framework still performs well on data sets with a large proportion of unlabeled entities. In our framework, to alleviate the above problem that a model may predict most words of an input context as nonentities, the adaPU learning technique adjusts the loss coefficients of positive and negative samples. Moreover, instead of constructing a single fixed query for each entity type as input to the MRC model, we propose a new method that dynamically constructs multiple queries for each entity type, which also brings a slight performance improvement for NER. Accordingly, we explore new training and entity inference strategies for the proposed framework.
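To make the re-weighting idea concrete, the PyTorch sketch below shows a generic binary token-level loss in which the term contributed by negative (i.e., unlabeled-as-nonentity) tokens is scaled by a coefficient. This is only an illustration of the general positive-negative re-weighting idea under assumed tensor shapes; the function name `weighted_pu_style_loss`, the argument `neg_weight`, and the way such a coefficient would be adapted during training are placeholders, not the paper's adaPU formulation.

```python
import torch
import torch.nn.functional as F

def weighted_pu_style_loss(logits, labels, neg_weight):
    """Binary token-level loss that down-weights the unlabeled/negative term.

    logits:     (batch, seq_len) raw scores for the positive (entity) class
    labels:     (batch, seq_len) 1 for annotated entity tokens, 0 otherwise
    neg_weight: scalar in (0, 1]; smaller values reduce the penalty for
                predicting "entity" on tokens that are merely unlabeled.
                (Hypothetical knob; the adaptive schedule in the paper differs.)
    """
    pos_mask = labels.float()
    neg_mask = 1.0 - pos_mask

    # Per-token binary cross-entropy against the positive class.
    per_token = F.binary_cross_entropy_with_logits(
        logits, pos_mask, reduction="none"
    )

    # Average the loss separately over positive and negative tokens.
    pos_loss = (per_token * pos_mask).sum() / pos_mask.sum().clamp(min=1.0)
    neg_loss = (per_token * neg_mask).sum() / neg_mask.sum().clamp(min=1.0)

    # Re-weighting the negative/unlabeled term is the key idea illustrated here.
    return pos_loss + neg_weight * neg_loss
```

In this sketch, shrinking `neg_weight` lessens the pressure to label every unannotated token as a nonentity, which is the failure mode described above; how the coefficient should actually be set or adapted is specific to the proposed method.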