While much recent work has examined how linguistic information is encoded in pretrained sentence representations, comparatively little is understood about how these models change when adapted to solve downstream tasks. Using a suite of analysis techniques (supervised probing, unsupervised similarity analysis, and layer-based ablations), we investigate how fine-tuning affects the representations of the BERT model. We find that while fine-tuning necessarily makes significant changes, it does not lead to catastrophic forgetting of linguistic phenomena. Instead, fine-tuning is a conservative process that primarily affects the top layers of BERT, albeit with noteworthy variation across tasks. In particular, dependency parsing reconfigures most of the model, whereas SQuAD and MNLI involve much shallower processing. Finally, we also find that fine-tuning has a weaker effect on representations of out-of-domain sentences, suggesting room for improvement in model generalization.