Building end-to-end speech synthesisers for Indian languages is challenging, given the lack of adequate clean training data and the multiple grapheme representations across languages. This work explores the importance of training multilingual, multi-speaker text-to-speech (TTS) systems based on language families. The objective is to exploit the phonotactic properties of language families, so that small amounts of accurately transcribed data across languages can be pooled together to train TTS systems. These systems can then be adapted to new languages belonging to the same family in extremely low-resource scenarios. TTS systems are trained separately for the Indo-Aryan and Dravidian language families, and their performance is compared to that of a combined Indo-Aryan+Dravidian voice. We also investigate the amount of training data required per language in a multilingual setting. Same-family and cross-family synthesis, as well as adaptation to unseen languages, are analysed. The analyses show that language family-wise training of Indic systems is the way forward for the Indian subcontinent, where a large number of languages are spoken.

Index Terms—end-to-end speech synthesis, Indian languages, language families, low-resource

• This work is one of the first attempts to study the importance of language families in the context of speech synthesis.
• We compare language family-specific Indo-Aryan (IA) and Dravidian (Dr) models with a combined Indo-Aryan+Dravidian (IA+Dr) system.
• We also assess the performance of models trained in data-stressed situations, reducing the training data used per language in the multilingual voice.
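The family-wise data pooling described above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the language-to-family mapping reflects standard classifications, but the corpus structure and utterance entries are invented for the example.

```python
from collections import defaultdict

# Illustrative language-to-family mapping (assumption: these six languages
# stand in for the larger Indic inventory used in practice).
LANGUAGE_FAMILY = {
    "Hindi": "Indo-Aryan",
    "Bengali": "Indo-Aryan",
    "Marathi": "Indo-Aryan",
    "Tamil": "Dravidian",
    "Telugu": "Dravidian",
    "Kannada": "Dravidian",
}

def pool_by_family(corpora):
    """Group per-language (text, audio) utterance lists into family-level
    training pools, so each family's TTS model trains on all of its
    member languages' data combined."""
    pools = defaultdict(list)
    for language, utterances in corpora.items():
        family = LANGUAGE_FAMILY[language]
        pools[family].extend(utterances)
    return dict(pools)

# Toy corpora: a handful of (transcript, audio_path) pairs per language.
corpora = {
    "Hindi": [("namaste", "hi_0001.wav")],
    "Tamil": [("vanakkam", "ta_0001.wav")],
    "Telugu": [("namaskaram", "te_0001.wav")],
}
pools = pool_by_family(corpora)
# pools["Dravidian"] now holds the Tamil and Telugu utterances together,
# while pools["Indo-Aryan"] holds only the Hindi data.
```

A family-level model trained on such a pool can then be fine-tuned on a small corpus from an unseen language of the same family, which is the low-resource adaptation scenario the abstract describes.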