Although existing studies show that ranking questions can offer fine-grained information about people's opinions, others raise concerns that the complexity of these questions induces measurement errors. We introduce a statistical framework that improves the analysis of ranking data by addressing such measurement errors. First, we formally define measurement errors arising from random responses: arbitrary, meaningless answers that follow a wide range of random patterns. We then quantify the bias due to random responses, show that this bias can shift conclusions in any direction, and explain why item order randomization alone does not remove it. Next, we introduce a methodology built on two key design-based components: item order randomization and the addition of paired "anchor" ranking questions with known correct answers. These designs allow researchers to (1) learn the direction of the bias due to random responses and (2) estimate the proportion of random responses, which enables our bias-corrected estimator. We illustrate our methods by studying the relative importance people assign to partisan identity compared with racial, gender, and religious identities, finding that about 30% of respondents offered random responses and that these responses can change substantive conclusions.
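
To make the correction concrete, the following is a minimal sketch of how an anchor design can identify the contamination rate and de-bias an estimate. It assumes the simplest setting, in which a fraction $p$ of respondents rank the $J$ items uniformly at random over the $J!$ orderings and attentive respondents always answer the anchor question correctly; the symbols $p$, $q$, $\theta$, and $\theta_0$ are illustrative and are not the paper's notation. Because a uniformly random ranking matches the anchor's known correct answer with probability $c = 1/J!$, the observed anchor pass rate $\hat{q}$ identifies the proportion of random responses:
\[
\hat{q} = (1 - p)\cdot 1 + p\,c
\quad\Longrightarrow\quad
\hat{p} = \frac{1 - \hat{q}}{1 - c}, \qquad c = \frac{1}{J!}.
\]
Given $\hat{p}$, any ranking-based quantity $\theta$ whose observed estimate mixes the true value with the value $\theta_0$ implied by uniform random ranking (for an average rank, $\theta_0 = (J+1)/2$) can then be de-biased by inverting the mixture:
\[
\hat{\theta}_{\mathrm{obs}} = (1 - \hat{p})\,\hat{\theta} + \hat{p}\,\theta_0
\quad\Longrightarrow\quad
\hat{\theta} = \frac{\hat{\theta}_{\mathrm{obs}} - \hat{p}\,\theta_0}{1 - \hat{p}}.
\]
This two-step structure, estimating the contamination rate from the anchor and then reweighting the observed estimate, is one way to read the role of the paired anchor questions described above; the paper's estimator may differ in its exact form.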