Motivation: Word embedding approaches have revolutionized Natural Language Processing (NLP) research. These approaches map words to a low-dimensional vector space in which words with similar linguistic features lie close together. Such embeddings also preserve local linguistic regularities, such as analogy. Embedding-based approaches have also been developed for proteins; to date, they treat amino acids as words and proteins as sentences of amino acids. These approaches have been evaluated either qualitatively, via visual inspection of the embedding space, or extrinsically, via performance on a downstream task. However, it is difficult to directly assess the intrinsic quality of the learned embeddings.
Results: In this paper, we introduce dom2vec, an approach for learning protein domain embeddings. We also present four intrinsic evaluation strategies that directly assess the quality of protein domain embeddings, leveraging the hierarchical relationships of InterPro domains, known secondary structure classes, Enzyme Commission class information, and Gene Ontology annotations. Importantly, these evaluations allow us to assess the quality of learned embeddings independently of any particular downstream task, analogous to the local linguistic features used in traditional NLP. We also show that dom2vec embeddings are competitive with, or even outperform, state-of-the-art approaches on downstream tasks.
Availability: The protein domain embedding vectors and the entire code to reproduce the results will be made available.
Contact: melidis@l3s.uni-hannover.de