Self-Admitted Technical Debt (SATD) refers to the common practice of developers explicitly documenting and acknowledging technical debt in their projects. Identifying SATD across different contexts is a key activity for effective technical debt management and resolution. While previous research has relied on natural language processing techniques and specialized models for SATD identification, this study explores the potential of Large Language Models (LLMs) for the task. We compare the performance of three LLMs (Claude 3 Haiku, GPT-3.5 Turbo, and Gemini 1.0 Pro) against the generalization performance of the state-of-the-art model designed for SATD identification. We also investigate the impact of prompt engineering on LLM performance in this context. Our findings show that the LLMs achieve results competitive with the state-of-the-art model. However, under the Matthews Correlation Coefficient (MCC), which accounts for all four confusion-matrix categories, the LLMs score lower than the state-of-the-art model, indicating less balanced predictions. Nevertheless, we conclude that a well-designed prompt can mitigate this bias, resulting in a higher MCC score.
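Throughout, MCC denotes the standard Matthews Correlation Coefficient, computed from the four confusion-matrix counts (true/false positives and negatives); we restate the textbook definition here for reference:

\[
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)\,(TP + FN)\,(TN + FP)\,(TN + FN)}}
\]

An MCC of 1 indicates perfect agreement between predictions and labels, 0 indicates performance no better than chance, and -1 indicates total disagreement, which is why the metric penalizes classifiers that perform well on only one class.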