Social media can be a major accelerator of the spread of misinformation, thereby potentially compromising both individual well-being and social cohesion. Despite significant recent advances, the study of online misinformation is a relatively young field facing several (methodological) challenges. In particular, the detection of online misinformation has proven difficult, as large-scale online data streams require (semi-)automated, highly specific and therefore sophisticated methods to separate posts containing misinformation from irrelevant posts. In the present paper, we introduce the adaptive community-response (ACR) method, an unsupervised technique for the large-scale collection of misinformation on Twitter (now known as 'X'). The ACR method builds on previous findings showing that Twitter users occasionally reply to misinformation with fact-checks that refer to specific fact-checking sites (crowdsourced fact-checking). In a first step, we captured such misinforming but fact-checked tweets. In a second step, these tweets were used to extract specific linguistic features (keywords), enabling us, in a third step, to also collect misinforming tweets that were never fact-checked. We first present a mathematical framework of our method, followed by an explicit algorithmic implementation. We then evaluate ACR on a comprehensive dataset consisting of > 25 million tweets, belonging to > 300 misinforming stories. Our evaluation shows that ACR is a valid and useful tool for the large-scale collection of misinformation online: text similarity measures clearly indicate correspondence between the claims of false stories and the collected tweets. Taken together, ACR's efficacy rests upon three pillars: (i) the adoption of prior, pioneering research in the field, (ii) a well-formalized mathematical framework and (iii) robust empirical validation.
This triad leads us to the conclusion that ACR has the potential to make a valuable contribution to further resolving the methodological challenges of the field.
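The three-step pipeline described above can be illustrated with a minimal sketch. Everything here is hypothetical: the fact-checking domains, the data format, the plain word-frequency keyword extraction and the threshold-based matching are simplified stand-ins for the paper's actual formalized method.

```python
# Hypothetical sketch of the three ACR steps (illustrative only).
from collections import Counter

# Illustrative list of fact-checking sites (assumption, not the paper's list).
FACT_CHECK_DOMAINS = {"snopes.com", "politifact.com"}

def is_fact_checked(tweet, replies):
    """Step 1: a tweet counts as fact-checked if any reply links a fact-checking site."""
    return any(domain in reply
               for reply in replies.get(tweet["id"], [])
               for domain in FACT_CHECK_DOMAINS)

def extract_keywords(texts, k=3):
    """Step 2: extract the k most frequent words as characteristic keywords
    (a crude stand-in for a proper linguistic-feature extraction)."""
    counts = Counter(word.lower() for text in texts for word in text.split())
    return {word for word, _ in counts.most_common(k)}

def collect_similar(all_tweets, keywords, min_hits=2):
    """Step 3: collect further tweets that share enough keywords,
    including those that were never fact-checked."""
    return [t for t in all_tweets
            if sum(w in t["text"].lower().split() for w in keywords) >= min_hits]
```

In a real setting, step 2 would use a more discriminative weighting (e.g. TF-IDF against a background corpus) rather than raw counts, and step 3 would run over a streaming collection endpoint rather than an in-memory list.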