Graduates of computer science degree programs are increasingly being asked to maintain large, multi-threaded software systems; however, the maintenance of such systems is typically not well covered by software engineering texts or curricula. We conducted a think-aloud study with 15 students in a graduate-level computer science class to discover the strategies that students apply, and to what effect, in performing corrective maintenance on concurrent software. We collected think-aloud and action protocols and annotated them for a number of behavioral attributes and maintenance strategies. We divided the protocols into groups based on each participant's success in diagnosing and correcting the failure, and we evaluated the groups for statistically significant differences in those attributes and strategies.

In this paper, we report a number of interesting observations that emerged from this study. All participants performed diagnostic executions of the program to aid program comprehension; however, the participants who relied on this as their predominant strategy for diagnosing the fault were all unsuccessful. Among the participants who successfully diagnosed the fault and displayed high confidence in their diagnosis, we found two commonalities: they all recognized that the fault involved the violation of a concurrent-programming idiom, and they all constructed detailed behavioral models (similar to UML sequence diagrams) of execution scenarios. We present detailed analyses to explain the attributes that correlated with success or lack of success. Based on these analyses, we make recommendations for improving software engineering curricula by better training students to apply these strategies effectively.
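For readers unfamiliar with what a violation of a concurrent-programming idiom can look like, the Java sketch below shows one generic example: a condition checked with `if` rather than `while` before `wait()`. This is an illustrative assumption on our part, not the specific fault used in the study.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch of a concurrent-programming idiom violation
// (not the study's actual subject program).
class BoundedBuffer {
    private final Queue<Integer> items = new ArrayDeque<>();

    public synchronized void put(int value) {
        items.add(value);
        notifyAll(); // wake any consumer waiting for an item
    }

    public synchronized int take() throws InterruptedException {
        // Idiom violation: the guard is checked with 'if' instead of 'while',
        // so a spurious wakeup or a competing consumer can leave the queue
        // empty when this thread resumes, and items.remove() then fails.
        if (items.isEmpty()) {
            wait();
        }
        return items.remove();
        // Correct idiom: while (items.isEmpty()) { wait(); }
    }
}
```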