Music streaming services feature billions of playlists created by users, professional editors, or algorithms. In this content-overload scenario, it is crucial to characterise playlists so that music can be effectively organised and accessed. Playlist titles and descriptions are proposed in natural language, either manually by music editors and users or automatically from pre-defined templates. However, the former is time-consuming, while the latter is limited by its vocabulary and the music themes it covers. In this work, we propose PLAYNTELL, a data-efficient multi-modal encoder-decoder model for automatic playlist captioning. Compared to existing music captioning algorithms, PLAYNTELL additionally leverages linguistic and musical knowledge to generate correct and thematic captions. We benchmark PLAYNTELL on a new dataset of editorial playlists collected from two major music streaming services. PLAYNTELL yields 2x-3x higher BLEU@4 and CIDEr scores than state-of-the-art captioning algorithms.