“…For example, dysarthric speakers with very low speech intelligibility exhibit clearer patterns of articulatory imprecision, decreased volume and clarity, increased dysfluencies, slower speaking rate and changes in pitch [29], while those diagnosed with mid or high speech intelligibility are closer to normal speakers. Such heterogeneity further increases the mismatch against normal speech and the difficulty of both speaker-independent (SI) ASR system development using limited impaired speech data and fine-grained personalization to individual users' data [3,25,30]. So far, the majority of prior research addressing dysarthric speaker-level diversity has focused on using speaker identity alone, either in speaker-dependent (SD) data augmentation [7,9,13,14,18,27] or in speaker-adapted or speaker-dependent ASR system development [1, 3, 4, 7, 11-13, 19, 22, 25, 31-33]. In contrast, very few prior studies have used speech impairment severity information for dysarthric speech recognition.…”