The loss of verb-second (V2) word order has received considerable attention, with some theories linking it to learning (e.g. Lightfoot 1999, Yang 2002). Here, we use artificial language learning experiments to test, in a controlled setting, which factors affect the learning of V2. Specifically, we build on previous work demonstrating a general beneficial effect of input variability. We explore the role of variation in clause-initial constituents by comparing artificial languages that differ both in the kinds of grammatical categories that tend to appear in initial position and in the level of variability present. We find that these different distributions of clause-initial constituents do affect V2 learning outcomes. However, contrary to our predictions, the language with the highest level of variability was not learnt best; rather, a language containing many adjunct-initial sentences was. We discuss the possibility that a high proportion of clause-initial adjuncts is in fact important to acquiring V2 grammars in natural language. We find further support for this in corpus data, which show a high proportion of adjunct-initial sentences in stable V2 languages and a low proportion in languages that were in the process of losing V2. We also discuss the role of variability in grammatical categories rather than grammatical roles, which might give languages with many clause-initial adjuncts an advantage. Taken together, our findings provide the first evidence for a causal link between a reduction of evidence in the input and the loss of V2.