Background
Artificial intelligence-generated content (AIGC) has stepped into the spotlight with the emergence of ChatGPT, making the effective use of AIGC for education a topic of growing interest.

Objectives
This study explores the effectiveness of integrating AIGC into programming learning through debugging. First, the study presents three levels of AIGC integration based on varying levels of abstraction. Then, drawing on extended effective use theory, it proposes the underlying mechanism through which AIGC integration affects programming learning performance and computational thinking.

Methods
Three debugging interfaces integrating AIGC via ChatGPT were developed according to the three levels of AIGC integration design. A between-subjects experiment was conducted with one control group and three experimental groups. Analysis of covariance and a structural equation model were employed to examine the effects.

Results and Conclusions
The results show that the second and third levels of abstraction in AIGC integration yield better learning performance and computational thinking, whereas the first level shows no difference from traditional debugging. The test of the underlying mechanism indicates that the second and third levels of abstraction promote transparent interaction, which enhances representational fidelity and in turn improves learning performance and computational thinking. Moreover, the study finds that learning fidelity weakens the effect of transparent interaction on representational fidelity. Our research offers valuable theoretical and practical insights.