Knowledge graph completion (KGC) involves inferring missing entities or relationships within a knowledge graph and plays a crucial role across various domains, including intelligent question answering, recommendation systems, and dialogue systems. Traditional knowledge graph embedding (KGE) methods have proven effective at exploiting structured data and relationships. However, these methods often overlook vast amounts of unstructured data and lack the complex reasoning capabilities required to handle ambiguous queries or rare entities. Recently, the rapid development of large language models (LLMs) has demonstrated exceptional potential in text comprehension and contextual reasoning, offering new prospects for KGC tasks. By using a traditional KGE model to capture the structural information of entities and relations and generate candidate entities, and then reranking these candidates with a generative LLM, the LLM's output can be constrained to improve reliability. Nevertheless, new challenges, such as omissions and incorrect responses, arise during the reranking process. To address these issues, a knowledge-guided LLM reasoning framework for knowledge graph completion (KLR-KGC) is proposed. The framework retrieves two types of knowledge from the knowledge graph (analogical knowledge and subgraph knowledge) to enhance the LLM's logical reasoning ability for the specific task while injecting relevant additional knowledge. By integrating a chain-of-thought (CoT) prompting strategy, the model guides the LLM to filter and rerank the candidate entities, constraining its output to reduce omissions and incorrect responses. The framework aims to learn and uncover latent correspondences between entities, guiding the LLM to make reasonable inferences based on the supplementary knowledge and thereby produce more accurate predictions. Experimental results demonstrate that on the FB15k-237 dataset, KLR-KGC outperformed the entity generation model (CompGCN), achieving a 4.8% improvement in MRR and a 5.8% improvement in Hits@1.
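
To make the candidate-generation-then-rerank pipeline concrete, the following is a minimal, self-contained sketch, not the paper's actual implementation. It assumes a TransE-style scoring function over toy embeddings in place of the trained KGE model, and the entity names, the `generate_candidates` helper, and the `build_cot_prompt` prompt template are hypothetical illustrations of how retrieved analogical and subgraph knowledge could be injected into a CoT reranking prompt whose answer space is restricted to the KGE candidates.

```python
import numpy as np

# --- Toy setup: random embeddings stand in for a trained KGE model (e.g., TransE). ---
# Entity/relation names and the embedding dimension are illustrative only.
rng = np.random.default_rng(0)
entities = ["Barack_Obama", "Honolulu", "Chicago", "Hawaii", "Columbia_University"]
relations = ["born_in"]
dim = 16
ent_emb = {e: rng.normal(size=dim) for e in entities}
rel_emb = {r: rng.normal(size=dim) for r in relations}

def kge_score(head, relation, tail):
    """TransE-style score: smaller ||h + r - t|| means a more plausible triple."""
    return -np.linalg.norm(ent_emb[head] + rel_emb[relation] - ent_emb[tail])

def generate_candidates(head, relation, k=3):
    """Step 1: rank all entities by KGE score and keep the top-k candidate tails."""
    scored = sorted(
        ((kge_score(head, relation, t), t) for t in entities if t != head),
        reverse=True,
    )
    return [t for _, t in scored[:k]]

def build_cot_prompt(head, relation, candidates, analogies, subgraph_facts):
    """Step 2: assemble a chain-of-thought reranking prompt with retrieved knowledge.

    Restricting the LLM to the KGE candidate list is what constrains its output
    and reduces omissions and unconstrained (incorrect) answers.
    """
    return "\n".join([
        f"Query: ({head}, {relation}, ?)",
        "Analogical examples (similar completed triples):",
        *[f"  - {a}" for a in analogies],
        "Subgraph facts related to the query entity:",
        *[f"  - {f}" for f in subgraph_facts],
        f"Candidate answers: {', '.join(candidates)}",
        "Think step by step, then rerank the candidates from most to least likely.",
        "Answer only with entities from the candidate list.",
    ])

candidates = generate_candidates("Barack_Obama", "born_in")
prompt = build_cot_prompt(
    "Barack_Obama", "born_in", candidates,
    analogies=["(Abraham_Lincoln, born_in, Hodgenville)"],
    subgraph_facts=["(Barack_Obama, lived_in, Chicago)", "(Honolulu, located_in, Hawaii)"],
)
print(prompt)  # The assembled prompt would then be sent to a generative LLM for reranking.
```

In this sketch, the KGE stage supplies structural plausibility while the prompt carries the retrieved analogical and subgraph knowledge; the generative LLM would only ever see, and choose among, the KGE-produced candidates.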