Therapeutic strategies for tumor control have traditionally assumed that maximizing the reduction in tumor volume correlates with clinical efficacy. Unfortunately, such rapid decreases in tumor burden are almost invariably followed by the emergence of therapeutic resistance. Evolution-based treatment strategies aim to delay this outcome by promoting the maintenance of tumoral heterogeneity. While these strategies have shown promise in recent clinical trials, they often rely on biological conjecture and intuition to derive their parameters. Reproducing the success of this treatment paradigm is contingent on a formal elucidation of the underlying subclonal interactions. One consequence of these interactions, “competitive release”, is an evolutionary phenomenon describing the unopposed proliferation of resistant populations following maximally tolerated systemic therapy. Although competitive release is often assumed in evolutionary models of cancer, here we present the first empirical evidence of it occurring in an in vitro tumor environment. We found that this phenomenon is modulated by both drug dose and initial population composition. Consequently, monotypic fitness differentials were insufficient to predict its outcomes accurately. Instead, deriving the underlying frequency-dependent evolutionary game dynamics is essential to understanding the resulting subpopulation shifts through time. To evaluate the impact of these non-autonomous growth behaviors over longer time series, we tested a range of commonly employed growth models, some of which form the foundation of ongoing clinical trials. While these models were useful for identifying persistent qualitative features, they exhibited significant fragility and model-specific behaviors that limited their ability to make consistent quantitative predictions, even when their parameters were empirically derived.
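To make the frequency-dependent reasoning concrete, the sketch below iterates a two-type replicator equation under a hypothetical payoff matrix. The payoff values, type labels, and function names are illustrative assumptions, not parameters measured in this study; the point is only that when each subclone's fitness depends on the current population composition, the same pair of clones can reach opposite long-run outcomes from different initial mixtures, which is why a monotypic fitness differential alone cannot predict the result.

```python
import numpy as np

# Minimal two-type replicator dynamics. The payoff matrix is a
# hypothetical illustration, not a parameter set from the study:
# A[i, j] is the growth payoff to type i when interacting with type j,
# for a drug-sensitive (S, index 0) and drug-resistant (R, index 1) clone.
A = np.array([[1.0, 0.6],
              [0.8, 0.9]])

def replicator_step(x, A, dt=0.01):
    """Advance the sensitive-clone frequency x by one Euler step.

    Each type's fitness is a weighted average over the current mix,
    so selection is frequency dependent rather than fixed.
    """
    p = np.array([x, 1.0 - x])
    fitness = A @ p               # frequency-dependent fitness of S and R
    mean_fitness = p @ fitness    # population mean fitness
    return x + dt * x * (fitness[0] - mean_fitness)

def final_frequency(x0, steps=5000):
    x = x0
    for _ in range(steps):
        x = replicator_step(x, A)
    return x

# For this matrix the interior equilibrium sits at x* = 0.6 and is
# unstable, so the long-run winner depends on the starting composition;
# a dose change would shift the payoffs and move this tipping point.
for x0 in (0.2, 0.5, 0.8):
    print(f"initial S fraction {x0:.1f} -> final S fraction {final_frequency(x0):.3f}")
```

With this particular matrix, starting compositions below the tipping point fix the resistant clone while those above it fix the sensitive clone, mirroring the dependence on initial population composition reported above.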
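The fragility of long-horizon forecasts can likewise be illustrated by fitting several commonly used growth models to the same short, noisy time series and extrapolating. The synthetic data, parameter values, and model set below are hypothetical stand-ins, not the experimental measurements or the specific models evaluated in this work; they simply show how models that agree within the observation window can diverge sharply beyond it.

```python
import numpy as np
from scipy.optimize import curve_fit

# Three growth models that are commonly fit to tumor time series.
def exponential(t, n0, r):
    return n0 * np.exp(r * t)

def logistic(t, n0, r, k):
    return k / (1.0 + (k / n0 - 1.0) * np.exp(-r * t))

def gompertz(t, n0, r, k):
    return k * np.exp(np.log(n0 / k) * np.exp(-r * t))

# Synthetic "observed" data: logistic growth over a short window,
# corrupted with 5% multiplicative noise. All values are hypothetical
# stand-ins for an empirically measured time series.
rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 5.0, 12)
counts = logistic(t_obs, 50.0, 0.9, 4000.0) * rng.normal(1.0, 0.05, t_obs.size)

# Fit each model to the same data, then extrapolate well past the
# observation window; the fits agree early but diverge later.
t_future = 15.0
for model, p0 in [(exponential, (50.0, 0.5)),
                  (logistic, (50.0, 0.5, 5000.0)),
                  (gompertz, (50.0, 0.5, 5000.0))]:
    popt, _ = curve_fit(model, t_obs, counts, p0=p0,
                        bounds=(1e-9, np.inf), maxfev=20000)
    print(f"{model.__name__:>11}: predicted size at t={t_future:.0f} "
          f"= {model(t_future, *popt):.0f}")
```

Running this sketch, the exponential extrapolation explodes while the saturating models plateau near their fitted carrying capacities, a simple analogue of the model-specific quantitative disagreement described above.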