Heavily pre-trained transformers for language modelling, such as BERT, have been shown to be remarkably effective for Information Retrieval (IR) tasks. IR benchmarks evaluate the effectiveness of (neural) ranking models based on the premise that a single query is used to instantiate the underlying information need. However, previous research has shown that (I) queries generated by users for a fixed information need are extremely variable and, in particular, (II) neural models are brittle and often easily make mistakes when tested with adversarial examples, i.e. examples with minimal modifications that do not change their label. Motivated by these observations, we aim to answer the following question with our work: how robust are retrieval pipelines with respect to variations in queries that do not change the queries' semantics? In order to obtain queries that are representative of users' querying variability, we first created a taxonomy based on the manual annotation of transformations occurring in a dataset (specifically UQV100) of user-created query variations. For example, going from the query 'cures for a bald spot' to the variation 'cures for baldness' applies a paraphrasing transformation that replaces words with synonyms. For each syntax-changing category of our taxonomy, we employ different automatic methods that, when applied to a query, generate a query variation. We conduct experiments on two datasets (TREC-DL-2019 and ANTIQUE) and create a total of 2430 query variations from 243 topics across both datasets. Our experimental results for two different IR tasks reveal that retrieval pipelines are not robust to query variations that keep the content the same, with effectiveness drops of ∼20% on average when compared with the original query as provided in the datasets. Our findings indicate that further work is required to make retrieval pipelines with neural ranking models more robust and that IR collections should include query variations, e.g. 
using the methods proposed here, for a single information need to better understand models' capabilities. The code and datasets are available at https://github.com/Guzpenha/query_variation_generators.
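The paraphrasing transformation described above (replacing words with synonyms) could be sketched as follows. This is a toy illustration, not the authors' implementation: the hard-coded synonym table stands in for a lexical resource or paraphrasing model, and the function name is hypothetical.

```python
# Toy sketch of a synonym-replacement ("paraphrasing") query variation
# generator. The SYNONYMS table is an illustrative stand-in for a real
# lexical resource such as WordNet.
SYNONYMS = {
    "cures": ["remedies", "treatments"],
    "bald": ["hairless"],
    "spot": ["patch", "area"],
}

def paraphrase_variations(query: str) -> list[str]:
    """Return variations of `query`, each swapping one word for a synonym."""
    tokens = query.split()
    variations = []
    for i, token in enumerate(tokens):
        for synonym in SYNONYMS.get(token, []):
            variant = tokens[:i] + [synonym] + tokens[i + 1:]
            variations.append(" ".join(variant))
    return variations

print(paraphrase_variations("cures for a bald spot"))
# e.g. includes 'remedies for a bald spot' and 'cures for a bald patch'
```

Each generated variation keeps the information need intact while changing the surface form, which is what makes it suitable for probing ranking robustness.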