Classification is the estimation of the class of each instance in a dataset; quantification is the estimation of the number of instances of each class in a dataset. Quantification methods typically assume that the data being quantified has the same class-conditional distribution as the data on which the quantifier was trained. This thesis addresses the situation where this assumption cannot be made: where there is class-conditional dataset shift between the training data and the test data. The work was motivated by sentiment analysis tasks using tweets on Twitter. When users are selected based on the content of their tweets, they cannot be considered to have been drawn at random from the population.

In this thesis, domain adaptation methods from classification have been applied to the problem of quantification. Separating the data into explicit sub-domains and quantifying each sub-domain separately can increase quantification accuracy, but under certain conditions it can also decrease it. An expression for expected quantification error was derived in closed form under some simplifying assumptions; in tests on real datasets, a method based on this approach gave a modest improvement in quantification accuracy. Constructing a new feature representation has proved successful for domain adaptation in classification, and an approach using Stacked Denoising Autoencoders to generate a new feature representation gave a 3.3% relative improvement in quantification accuracy. Finally, a method based on using Kernel Mean Matching to weight instances in the training set gave a relative improvement in quantification accuracy of 10.7%. Experiments were conducted on publicly available datasets and also on a custom dataset of Twitter users.

I would like to thank... David Weir and Novi Quadrianto for supervising me. Luc Berthouze for chairing my thesis committee.
My fellow members of the department, in particular Chris Inskip, Oliver Thomas, Matti Lyra and Miro Batchkarov for helping me get things to work.