Robot co-workers, like human co-workers, make mistakes that undermine trust. Yet trust is just as important in promoting human-robot collaboration as it is in promoting human-human collaboration. Moreover, individuals differ significantly in their attitudes toward robots, and these attitudes can shape or hinder their trust in robots. To better understand how individual attitudes influence trust repair strategies, we propose a theoretical model that draws on the theory of cognitive dissonance. To verify this model empirically, we conducted a between-subjects experiment in which 100 participants were each assigned to one of four repair strategies (apologies, denials, explanations, or promises) across three trust violations. Individual attitudes did moderate the efficacy of the repair strategies, and this effect varied across successive trust violations. Specifically, attitudes moderated the effectiveness of repair strategies most strongly during the second of the three trust violations, and promises were the repair strategy most affected by an individual's attitude.