Named entity recognition (NER) is a fundamental task in natural language processing. It is usually cast as a sequence labeling problem and typically addressed by neural conditional random field (CRF) models such as BiLSTM-CRF. Intuitively, entity types carry rich semantic information, and the entity type sequence of a sentence can reflect sentence-level semantics globally. However, most previous work recognizes named entities based on the feature representation of each token in the input sentence, and such token-level features cannot capture the global, entity-type-related semantic information of the sentence. In this paper, we propose a joint model that exploits this global type-related semantic information for NER. Concretely, we introduce a new auxiliary task, sentence-level entity type sequence prediction (TSP), to supervise and constrain the learning of global feature representations. Furthermore, a multitask learning method integrates the global type-related semantic information into the NER model. Experiments on four datasets covering different languages and domains show that our final model is highly effective, consistently outperforming the BiLSTM-CRF baseline and achieving competitive results on all datasets.
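The multitask setup described above can be sketched as a weighted sum of the main NER loss and the auxiliary TSP loss over a shared encoder. The following is a minimal illustrative sketch, not the paper's actual implementation: the loss functions, score representations, and the weight `lam` are all hypothetical placeholders standing in for the CRF negative log-likelihood and the TSP objective.

```python
# Sketch of a multitask objective: joint loss = NER loss + lam * TSP loss.
# All functions and the weight `lam` below are illustrative assumptions,
# not the authors' actual formulation.

def ner_loss(token_scores, gold_tags):
    # Placeholder for the token-level NER loss (e.g., a CRF negative
    # log-likelihood); here a simple per-token squared error for illustration.
    return sum((s - t) ** 2 for s, t in zip(token_scores, gold_tags))

def tsp_loss(type_seq_scores, gold_type_seq):
    # Placeholder for the auxiliary sentence-level entity type sequence
    # prediction (TSP) loss over the sentence's entity type sequence.
    return sum((s - t) ** 2 for s, t in zip(type_seq_scores, gold_type_seq))

def joint_loss(token_scores, gold_tags, type_seq_scores, gold_type_seq, lam=0.5):
    # Multitask objective: both heads share one encoder, so gradients from
    # the TSP head inject global type-related information into the NER model.
    return ner_loss(token_scores, gold_tags) + lam * tsp_loss(type_seq_scores, gold_type_seq)

print(joint_loss([0.9, 0.1], [1.0, 0.0], [0.8], [1.0]))
```

In practice the two losses would come from the tagging head and the TSP head of a shared BiLSTM encoder, with `lam` tuned on development data.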