Recent studies have exploited the knowledge distillation (KD) technique to address the time-consuming annotation task in semantic segmentation, whereby a teacher trained on a single dataset can be leveraged to annotate unlabeled data. However, in this setting the knowledge capacity is restricted and the knowledge variety across conditions is scarce; in particular, single-teacher KD prevents the student model from distilling information with cross-domain context. To address this limitation, we investigate learning a lightweight student from a group of teachers. Specifically, we train five distinct lightweight convolutional neural networks (CNNs) for semantic segmentation on several datasets. We also employ several state-of-the-art augmentation transformations during training. We then assess the impact of these training scenarios on student robustness and accuracy. As the main contribution of this paper, our proposed multi-teacher KD paradigm endows the student with the ability to amalgamate and capture a variety of knowledge representations from different sources. Results demonstrate that, aided by our proposed score-weighting system, our method outperforms existing approaches on both clean and corrupted data in the semantic segmentation task. Experiments validate that our multi-teacher framework yields improvements of 9% up to 32.18% over the single-teacher paradigm. Moreover, our paradigm surpasses previous supervised real-time methods on the semantic segmentation challenge.