Code-mixing and code-switching (CMCS) are frequent features of online conversations. Classification of such text is challenging when one of the languages involved is low-resourced. Fine-tuning pre-trained multilingual language models (PMLMs) is a promising avenue for code-mixed text classification. In this paper, we explore adapter-based fine-tuning of PMLMs for CMCS text classification. We introduce sequential and parallel stacking of adapters, continuous fine-tuning of adapters, and training adapters without freezing the original model as novel techniques with respect to single-task CMCS text classification. We also present a newly annotated dataset for the classification of Sinhala-English code-mixed and code-switched text, where Sinhala is a low-resourced language. Our dataset of 10,000 user comments has been manually annotated for five classification tasks: sentiment analysis, humor detection, hate speech detection, language identification, and aspect identification, thus making it the first publicly available Sinhala-English CMCS dataset with the largest number of annotation types. In addition to this dataset, we also carried out experiments on our proposed techniques with Kannada-English and Hindi-English datasets. These experiments confirm that our adapter-based PMLM fine-tuning techniques outperform, or are on par with, basic fine-tuning of PMLMs.
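To illustrate what adapter stacking over a PMLM looks like in practice, the sketch below uses the AdapterHub `adapter-transformers` fork of HuggingFace Transformers; this is only one possible realization under assumed names (the base model `xlm-roberta-base`, adapter names `cmcs_lang` and `sentiment`, and the label count are illustrative placeholders, not prescribed by the paper).

```python
# Minimal sketch: sequential stacking of two adapters on a PMLM.
# Assumes the AdapterHub `adapter-transformers` library; names are illustrative.
from transformers import AutoAdapterModel
import transformers.adapters.composition as ac

model = AutoAdapterModel.from_pretrained("xlm-roberta-base")

# One adapter per level of the stack.
model.add_adapter("cmcs_lang")   # lower adapter (e.g., CMCS/language adaptation)
model.add_adapter("sentiment")   # upper adapter for the target task
model.add_classification_head("sentiment", num_labels=3)

# Sequential stacking: activations pass through "cmcs_lang", then "sentiment".
# ac.Parallel("cmcs_lang", "sentiment") would instead run them side by side.
model.active_adapters = ac.Stack("cmcs_lang", "sentiment")

# Train only the adapter weights; this call freezes the PMLM backbone.
# Omitting it (so the backbone stays trainable) roughly corresponds to
# training adapters without freezing the original model.
model.train_adapter(["cmcs_lang", "sentiment"])
```

The resulting `model` can then be passed to a standard Transformers `Trainer` for the classification task; only the adapter (and head) parameters are updated when `train_adapter` is used.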