Fine-tuning pre-trained language models on downstream tasks with varying random seeds has been shown to be unstable, especially on small datasets. Many previous studies have investigated this instability and proposed methods to mitigate it. However, most studies only used the standard deviation of performance scores (SD) as their measure, which is a narrow characterization of instability. In this paper, we analyze SD and six other measures that quantify instability at different levels of granularity. Moreover, we propose a systematic framework to evaluate the validity of these measures. Finally, we analyze the consistency of and differences between these measures by reassessing existing instability mitigation methods. We hope our results will inform the development of better measures of fine-tuning instability.

Layer-wise Learning Rate Decay (LLRD; Howard and Ruder, 2018) assigns learning rates that decrease from the topmost layer to the bottom layer by a constant discount factor η (a hyper-parameter). Howard and Ruder (2018) empirically show that models trained with LLRD are more stable: bottom layers retain more of the generalizable pre-trained knowledge, while top layers are free to forget specialized pre-training knowledge.

Re-init (Zhang et al., 2021) stabilizes fine-tuning by re-initializing the top k layers of the PLM. The underlying intuition is similar to LLRD: the top layers of PLMs contain more pre-training-task-specific knowledge, and transferring it to downstream tasks may hurt stability.
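As an illustration of how LLRD can be set up in practice, the sketch below builds per-layer optimizer parameter groups for a BERT-style encoder with Hugging Face Transformers and PyTorch. The model name, base learning rate, and value of η are illustrative assumptions, not settings reported in the paper.

```python
# Minimal LLRD sketch: each encoder layer's learning rate is discounted by a
# constant factor eta per layer of depth below the top. Values are illustrative.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
base_lr = 2e-5   # learning rate for the topmost encoder layer (assumed value)
eta = 0.95       # constant per-layer discount factor (the eta hyper-parameter)

num_layers = model.config.num_hidden_layers
param_groups = []

# Embeddings sit below the bottom layer, so they receive the largest discount.
param_groups.append({
    "params": model.bert.embeddings.parameters(),
    "lr": base_lr * eta ** num_layers,
})

# Layer i (0 = bottom, num_layers - 1 = top) gets base_lr * eta^(depth from top).
for i, layer in enumerate(model.bert.encoder.layer):
    param_groups.append({
        "params": layer.parameters(),
        "lr": base_lr * eta ** (num_layers - 1 - i),
    })

# Task-specific parameters (pooler and classifier head) keep the full base rate.
param_groups.append({
    "params": list(model.bert.pooler.parameters())
              + list(model.classifier.parameters()),
    "lr": base_lr,
})

optimizer = torch.optim.AdamW(param_groups, lr=base_lr)
```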
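Similarly, a minimal sketch of Re-init for the same BERT-style model: the top k encoder layers are re-initialized with the model's own weight-initialization routine before fine-tuning. The choice k = 2 and the use of the private `_init_weights` helper are illustrative assumptions, not details taken from Zhang et al. (2021).

```python
# Minimal Re-init sketch: re-initialize the top k encoder layers of a
# pre-trained model before fine-tuning. k and the model name are illustrative.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
k = 2  # number of topmost layers to re-initialize (assumed value)

for layer in model.bert.encoder.layer[-k:]:
    # Re-sample the layer's weights with the model's own init scheme
    # (a private Hugging Face helper, used here for brevity).
    layer.apply(model._init_weights)
```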