Automatic speaker verification (ASV) has been successfully deployed for identity recognition. As ASV technology sees increasing use in real-world applications, channel mismatch caused by recording devices and environments severely degrades its performance, especially in the case of unseen channels. To this end, we propose a meta speaker embedding network (MSEN), trained via meta-learning, to generate channel-invariant utterance embeddings. Specifically, we optimize the differences between the embeddings of a support set and a query set in order to learn a channel-invariant embedding space for utterances. Furthermore, we incorporate distribution optimization (DO) to stabilize the performance of MSEN. To quantitatively measure the effect of MSEN on unseen channels, we specially design a generalized cross-channel (GCC) evaluation. Experimental results on the HI-MIA corpus demonstrate that the proposed MSEN considerably reduces the impact of channel mismatch, while significantly outperforming other state-of-the-art methods.