Text-based methods are now widely used to detect and measure a broad range of social science constructs, such as emotions or political attitudes. However, the validity of text-based measures can be difficult to assess, both because text data is typically unstructured, complex, and noisy, and because practical and conceptual guidance on how to validate text-based measures is scarce and inconsistent. What are the consequences of this situation for applied research? Our work addresses this question by providing a foundation for systematic engagement with validity in text-based social science research. Based on a systematic review and qualitative expert interviews, we identify and describe differences and common themes in validation practices. Our results show that scholars applied a great variety of validation steps, which, however, were rarely selected based on a conceptual understanding of validity. Our qualitative results support these findings: we recorded considerable confusion about which actions constitute appropriate validation steps, as well as uncertainty about how such steps should be reported. Overall, we call for more systematic efforts to develop a commonly shared language of validation practices so that validation steps for text-as-data methods can be effectively documented and evaluated.