Natural language inference (NLI) is a sentence-pair classification task with respect to the entailment relation. As already shown, certain deep learning architectures for the NLI task, INFERSENT in particular, may be exploited to obtain (supervised) universal sentence embeddings. Although the INFERSENT approach to sentence embeddings has recently been outperformed on various tasks by transformer-based architectures (such as BERT and its derivatives), it remains a useful tool in many NLP areas and also serves as a strong baseline. One of its greatest advantages is its relative simplicity. Moreover, in contrast to other approaches, INFERSENT models can be trained on a standard GPU within hours. Unfortunately, the majority of research on sentence embeddings is done in and for English, whereas other languages are largely neglected. To fill this gap, we propose a methodology for obtaining universal sentence embeddings in another language, based on training INFERSENT-style sentence encoders on a machine-translated NLI corpus, and present a transfer-learning use case on semantic textual similarity in Czech. In the following parts of this position paper we elaborate on each step of the outlined process.
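To make the sentence-pair classification setup concrete, the following is a minimal sketch of the feature composition used in INFERSENT: given two sentence embeddings u and v produced by a shared encoder, the NLI classifier operates on the concatenation of u, v, their element-wise absolute difference, and their element-wise product. The function name and the toy vectors below are illustrative, not part of the original work.

```python
import numpy as np

def pair_features(u, v):
    """INFERSENT-style pair representation for NLI classification:
    concatenate u, v, |u - v|, and u * v (element-wise).
    The result (of dimension 4d for d-dimensional embeddings) is fed
    to an MLP that predicts entailment / contradiction / neutral."""
    return np.concatenate([u, v, np.abs(u - v), u * v])

# toy 3-dimensional "sentence embeddings" for illustration only
u = np.array([1.0, 0.5, -0.2])
v = np.array([0.8, -0.1, 0.3])
feats = pair_features(u, v)
print(feats.shape)  # (12,)
```

In the actual model, u and v come from a bidirectional LSTM encoder with max pooling over hidden states; the same encoder, once trained on NLI data, is reused unchanged to embed sentences for downstream tasks such as semantic textual similarity.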