Large multilingual language models typically share their parameters across all languages, which enables cross-lingual task transfer, but learning can also be hindered when training updates from different languages are in conflict. In this paper, we propose novel methods for using language-specific subnetworks, which control cross-lingual parameter sharing, to reduce conflicts and increase positive transfer during fine-tuning. We introduce dynamic subnetworks, which are jointly updated with the model, and we combine our methods with meta-learning, an established, but complementary, technique for improving cross-lingual transfer. Finally, we provide extensive analyses of how each of our methods affects the models.
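To make the idea of language-specific subnetworks concrete, the sketch below shows one plausible realization, not the paper's actual implementation: a binary mask per language over the model's parameters that zeroes out gradient updates falling outside that language's subnetwork during fine-tuning. The mask construction, the `random_masks` helper, the language codes, and the `keep_ratio` value are all hypothetical stand-ins for whatever pruning or learning procedure produces the real masks.

```python
# Minimal sketch (assumed, not the authors' code): language-specific subnetworks
# realized as per-language binary masks applied to gradients during fine-tuning.
import torch
import torch.nn as nn

def random_masks(model: nn.Module, languages, keep_ratio=0.5):
    """Hypothetical stand-in for learned/pruned masks: one binary mask
    per language and per parameter tensor."""
    return {
        lang: {
            name: (torch.rand_like(p) < keep_ratio).float()
            for name, p in model.named_parameters()
        }
        for lang in languages
    }

def masked_finetune_step(model, optimizer, loss, masks, lang):
    """Apply the subnetwork mask of `lang` to the gradients, so only
    parameters inside that language's subnetwork are updated."""
    optimizer.zero_grad()
    loss.backward()
    for name, p in model.named_parameters():
        if p.grad is not None:
            p.grad.mul_(masks[lang][name])  # zero updates outside the subnetwork
    optimizer.step()

# Toy usage: a tiny classifier and a fake batch for one language.
model = nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
masks = random_masks(model, ["en", "sw"])
x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(x), y)
masked_finetune_step(model, optimizer, loss, masks, lang="en")
```

Under this reading, "dynamic" subnetworks would correspond to updating the masks themselves alongside the model during training rather than fixing them in advance; the details of how the paper does this are left to the main text.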