As Human-Agent Teams become increasingly capable of producing high-quality work, the need for effective bi-directional communication between teammates grows in proportion. Central to this need is the well-known problem of innate mistrust between humans and artificial intelligence, which leads to sub-optimal performance. To address this, computer scientists and human-computer interaction researchers alike have proposed and refined solutions based on different methods of AI interpretability. These methods include explicit AI explanations as well as implicit manipulations of the AI interface, otherwise known as AI transparency. Individually, each of these solutions holds considerable merit for repairing trust between teammates, yet each also has its own shortcomings. We posit that combining different interpretability mechanisms within human-agent teams mitigates their respective flaws and amplifies their respective strengths.