Graphs (also called networks) are powerful data abstractions, but they are challenging to work with, as many machine learning methods cannot be applied to them directly. Network Embedding (NE) methods resolve this by learning vector representations for the nodes, for subsequent use in downstream machine learning tasks. Link Prediction is one such important downstream task, used for example in recommender systems. NE methods achieve excellent accuracy for Link Prediction, but because the embedding dimensions have no intrinsic meaning, the resulting predictions are not straightforward to understand. Explaining why predictions are made can increase trustworthiness, help in understanding the underlying models, give insight into which features of the network are important for the predictions, and help meet regulatory requirements on the ability to explain machine-learning-based decisions. We study the problem of providing explanations for NE-based link predictions and introduce ExplaiNE, an approach to derive counterfactual explanations by identifying links in the network that explain link predictions. We show how ExplaiNE can be used generically on NE-based methods and consider ExplaiNE in more detail for Conditional Network Embedding, a particularly suitable state-of-the-art NE method. Extensive experiments demonstrate ExplaiNE's accuracy and scalability.
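To make the counterfactual notion concrete, the following minimal Python sketch brute-forces the question "which existing link, if removed, would most weaken a given predicted link?" on a toy setup. The toy embedding (truncated eigendecomposition of the adjacency matrix) and inner-product scoring are assumptions made purely for illustration; this is not the ExplaiNE method itself, which derives such explanations without this naive re-embedding per candidate link.

```python
# Brute-force illustration of counterfactual explanations for link prediction.
# NOT the ExplaiNE algorithm; it only demonstrates the underlying question:
# which existing link, if removed, would most weaken a predicted link (i, j)?

import numpy as np

def embed(adj, dim=4):
    """Toy network embedding: truncated eigendecomposition of the adjacency matrix."""
    vals, vecs = np.linalg.eigh(adj)
    idx = np.argsort(np.abs(vals))[::-1][:dim]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def link_score(emb, i, j):
    """Toy link-prediction score: inner product of the two node embeddings."""
    return float(emb[i] @ emb[j])

def explain(adj, i, j, dim=4):
    """Rank links incident to node i by how much their removal lowers score(i, j)."""
    base = link_score(embed(adj, dim), i, j)
    impact = {}
    for k in np.flatnonzero(adj[i]):
        if k == j:
            continue
        pert = adj.copy()
        pert[i, k] = pert[k, i] = 0          # counterfactual: remove link (i, k)
        impact[(i, k)] = base - link_score(embed(pert, dim), i, j)
    return sorted(impact.items(), key=lambda kv: -kv[1])

# Small example: two triangles joined by the link (2, 3).
adj = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]:
    adj[a, b] = adj[b, a] = 1
print(explain(adj, 0, 3))  # existing links whose removal most weakens the predicted link (0, 3)
```

The top-ranked links returned by `explain` are the counterfactual explanation: the existing connections that, had they been absent, would most have reduced the predicted score of the link being explained.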