Even though knowledge graphs have proven very useful for several tasks, they are marked by incompleteness. Completion algorithms aim to extend knowledge graphs by predicting missing (subject, predicate, object) triples, usually by training a model to discern between correct (positive) and incorrect (negative) triples. However, under the open-world assumption, in which a missing triple is not negative but unknown, negative triple generation is challenging. Although negative triples are known to drive the accuracy of completion models, their impact has not yet been thoroughly examined. To evaluate accuracy, test triples are considered positive and negative triples are derived from them. The evaluation protocol is thus impacted by the generation of negative triples, which remains to be analyzed. Another issue is that the knowledge graphs available for evaluation contain anomalies like severe redundancy, and it is unclear how anomalies affect the accuracy of completion models. In this paper, we analyze the impact of negative triple generation during both training and testing on translation-based completion models. We examine four negative triple generation strategies, which are also used to evaluate the models when anomalies in the test split are included and when they are discarded. In addition to previously studied anomalies like near-same predicates, we include another anomaly: knowledge present in the test split that is missing from the training split. Our main conclusion is that the most common strategy for negative triple generation (the local-closed world assumption) can be mimicked by a combination of a naïve strategy and an immediate-neighborhood strategy. This result suggests that completion models can be learned independently for certain subgraphs, which would render completion models useful in the context of knowledge graph evolution. Although anomalies are considered harmful since they artificially increase the accuracy of completion models, our results show otherwise for certain knowledge graphs, which calls for further research efforts.

CCS CONCEPTS
• Computing methodologies → Semantic networks; Supervised learning; • Information systems → Inconsistent data.
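
For readers unfamiliar with the local-closed world assumption mentioned above, the following is a minimal sketch of negative triple generation under it: each positive triple is corrupted at the subject or the object with a random entity, and candidates that already appear in the graph are rejected. The function name, parameters, and toy data are illustrative assumptions, not taken from the paper.

```python
import random

def lcwa_negatives(triples, entities, n_per_positive=1, seed=0, max_tries=100):
    """Generate negatives under the local-closed world assumption:
    any corrupted triple absent from the known graph is treated as negative."""
    rng = random.Random(seed)
    known = set(triples)          # known-positive triples
    negatives = []
    for s, p, o in triples:
        for _ in range(n_per_positive):
            for _ in range(max_tries):
                e = rng.choice(entities)
                # Corrupt either the subject or the object, chosen at random.
                candidate = (e, p, o) if rng.random() < 0.5 else (s, p, e)
                if candidate not in known:   # unknown, hence usable as negative
                    negatives.append(candidate)
                    break
    return negatives

# Toy usage with a two-triple graph and three entities (hypothetical data).
kg = [("alice", "knows", "bob"), ("bob", "knows", "carol")]
print(lcwa_negatives(kg, ["alice", "bob", "carol"]))
```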