Summary. Although conceptually quite simple, decision trees remain amongst the most popular classifiers applied to real-world problems. Their popularity is due to a number of factors, core amongst these being their ease of comprehension, robust performance and fast data processing capabilities. Additionally, feature selection is implicit within the decision tree structure. This chapter introduces the basic ideas behind decision trees, focusing on decision trees which only consider a rule relating to a single feature at each node (thereby making recursive axis-parallel slices in feature space to form their classification boundaries). The use of particle swarm optimisation (PSO) to train near-optimal decision trees is discussed, and PSO is applied in both a single-objective formulation (minimising misclassification cost) and a multi-objective formulation (trading off misclassification rates across classes). Empirical results are presented on popular classification data sets from the well-known UCI machine learning repository, and PSO is demonstrated to be fully capable of acting as an optimiser for trees on these problems. The results additionally support the argument that multi-objectification of a problem can improve single-objective search in classification problems.
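To make the notion of an axis-parallel tree concrete, the following is a minimal sketch of the kind of classifier described above: each internal node tests a single feature against a threshold, so the decision boundary is built from recursive axis-parallel slices of feature space. The class names, structure and example tree are illustrative assumptions, not the chapter's implementation.

```python
# Illustrative sketch only: an axis-parallel decision tree in which each
# internal node tests exactly one feature against a threshold.

class Node:
    """Internal node: routes samples on a single feature's threshold."""
    def __init__(self, feature, threshold, left, right):
        self.feature = feature      # index of the single feature tested
        self.threshold = threshold  # axis-parallel split point
        self.left = left            # subtree for x[feature] <= threshold
        self.right = right          # subtree for x[feature] > threshold

class Leaf:
    """Terminal node: assigns a class label."""
    def __init__(self, label):
        self.label = label

def classify(node, x):
    """Route a sample down the tree, testing one feature per node."""
    while isinstance(node, Node):
        node = node.left if x[node.feature] <= node.threshold else node.right
    return node.label

# Example: a depth-2 tree over 2-D samples, splitting first on feature 0
# at 0.5, then (in the right subtree) on feature 1 at 1.0.
tree = Node(0, 0.5,
            Leaf("A"),
            Node(1, 1.0, Leaf("B"), Leaf("C")))

print(classify(tree, [0.2, 3.0]))  # → A
print(classify(tree, [0.9, 0.4]))  # → B
print(classify(tree, [0.9, 2.0]))  # → C
```

Under this representation, training a tree (as PSO does in the chapter) amounts to searching over the discrete feature indices and continuous thresholds at each node.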