Dataset distillation aims to generate small datasets that preserve as much information as possible from large-scale datasets, in order to reduce storage and training costs. Recent state-of-the-art methods mainly constrain the sample generation process by matching synthetic images with the original ones in terms of gradients, embedding distributions, or training trajectories. Although the matching objectives vary, the current strategy for selecting original images is limited to naive random sampling. We argue that random sampling inevitably includes samples near the decision boundaries, which may provide large or noisy matching targets. Moreover, random sampling cannot guarantee the evenness and diversity of the sample distribution. Together, these factors cause large optimization oscillations and degrade matching efficiency. Accordingly, we propose a novel matching strategy named Dataset distillation by REpresentAtive Matching (DREAM), in which only representative original images are selected for matching. DREAM can be easily plugged into popular dataset distillation frameworks and reduces the number of matching iterations by 10 times without a performance drop. Given sufficient training time, DREAM yields further significant improvements and achieves state-of-the-art performance.

Figure 1: (a) The gradient norm distribution of the plane class in CIFAR10, comparing random samples with DREAM samples (x-axis: gradient norm; y-axis: sample number). (b) The oscillation of synthetic samples during training, comparing matched original images and synthetic images under DREAM and random sampling.
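The abstract does not spell out how representative images are chosen. As a rough illustration only, and not necessarily the authors' exact method, one way to realize "representative matching" is to cluster each class in feature space and pick, for each sub-cluster, the image nearest its centroid: centroids lie away from decision boundaries and spread evenly over the class, which is exactly the property the abstract argues random sampling lacks. All names below (`select_representative`, `encoder`, the batch size of 64) are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_representative(features: np.ndarray, n_select: int, seed: int = 0) -> np.ndarray:
    """Pick `n_select` representative indices from one class.

    `features` is an (N, D) array of per-image embeddings. The class is
    partitioned into `n_select` sub-clusters, and for each sub-cluster the
    index of the image closest to its centroid is returned.
    """
    kmeans = KMeans(n_clusters=n_select, n_init=10, random_state=seed).fit(features)
    selected = []
    for c in range(n_select):
        members = np.where(kmeans.labels_ == c)[0]
        dists = np.linalg.norm(features[members] - kmeans.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(dists)])
    return np.asarray(selected)

# Sketch of usage: replace the random mini-batch inside a gradient- or
# trajectory-matching loop with the representative one.
# feats = encoder(images_of_one_class)          # hypothetical feature extractor
# batch_idx = select_representative(feats, n_select=64)
# real_batch = images_of_one_class[batch_idx]   # matched against synthetic images
```

Because the selection depends only on the real images, it can be computed once (or refreshed periodically) and reused across matching iterations, so its cost amortizes over training.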