We present the first in-depth, large-scale study of misleading repurposing, in which a malicious user changes the identity of their social media account, e.g., by editing its profile attributes, in order to use the account for a new purpose while retaining its followers. We propose a definition of this behavior and a methodology that applies supervised learning to data mined from the Internet Archive's Twitter Stream Grab to flag repurposed accounts. We found over 100,000 accounts that may have been repurposed; of those, 28% were removed from the platform within 2 years, corroborating their inauthenticity. We also characterize repurposed accounts and find that repurposing is more likely to occur after a period of inactivity and the deletion of old tweets. We further provide evidence that adversaries target accounts with high follower counts for repurposing, and that some inflate follower counts through follow-back schemes before repurposing. These results have implications for the security and integrity of social media platforms, for data science studies in how they treat historical data, and for society at large in how users can be deceived about the popularity of an opinion. The data and the code are available at https://github.com/tugrulz/MisleadingRepurposing.
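To make the notion of repurposing concrete, the following minimal sketch flags a candidate repurposed account by measuring how completely its identity attributes changed between two snapshots of the same user ID. This is an illustrative heuristic, not the paper's trained classifier; the field names follow the Twitter API v1.1 user object, and the threshold is an assumed value.

```python
import re


def token_jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the lowercase word sets of two strings."""
    tokens_a = set(re.findall(r"\w+", a.lower()))
    tokens_b = set(re.findall(r"\w+", b.lower()))
    if not tokens_a and not tokens_b:
        return 1.0  # two empty attributes count as identical
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)


def profile_change_score(old: dict, new: dict) -> float:
    """Average dissimilarity over key identity attributes, in [0, 1]."""
    fields = ("screen_name", "name", "description")
    return sum(
        1.0 - token_jaccard(old.get(f, ""), new.get(f, "")) for f in fields
    ) / len(fields)


def is_candidate_repurposed(old: dict, new: dict, threshold: float = 0.7) -> bool:
    """Same account ID (followers retained), but identity attributes overhauled."""
    return old["id"] == new["id"] and profile_change_score(old, new) >= threshold


# Hypothetical snapshots of one account, before and after repurposing.
old = {"id": 1, "screen_name": "daily_recipes", "name": "Daily Recipes",
       "description": "Sharing a new recipe every day"}
new = {"id": 1, "screen_name": "crypto_signals", "name": "Crypto Signals",
       "description": "Pump alerts and trading tips"}
print(is_candidate_repurposed(old, new))  # prints True: no attribute shares a token
```

In practice, such per-attribute change signals would serve as features for a supervised classifier trained on labeled snapshot pairs rather than as a fixed threshold rule.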