To help address programming plagiarism and collusion, students should be informed about acceptable practices and about program similarity, both coincidental and non-coincidental. However, current approaches are usually manual, brief, and delivered well before students are in a situation where they might commit academic misconduct. This paper presents an assessment submission system with automated, personalized, and timely formative feedback, intended for institutions that apply some leniency to early instances of plagiarism and collusion. If a student’s submission shares coincidental or non-coincidental similarity with other submissions, personalized similarity reports are generated for the involved submissions, and the students are expected to explain the similarity and resubmit the work. Otherwise, a report simulating similarities is sent only to the author of the submitted program to reinforce their understanding. Results from two quasi-experiments spanning two academic semesters suggest that students using our approach are more aware of programming plagiarism and collusion, including the futility of some program disguises. Further, their submitted programs exhibit lower similarity, even at the level of program flow, suggesting that they are less likely to have engaged in programming plagiarism and collusion. Student behavior while using the system is also analyzed based on statistics of the generated reports and on students’ justifications for the reported similarities.