This paper describes our contribution to SemEval 2022 Task 8 on Multilingual News Article Similarity. The aim was to evaluate substantially different approaches and identify the best-performing one. To that end, we considered systems based on Transformer encoders, NER-based and NLI-based methods, and their combination with SVO dependency-triplet representations. The results show that Transformer models produce the best scores. However, there remains room for approaches that, while not yet competitive in score, yield more interpretable results.

1 https://huggingface.co/distilbert-base-multilingual-cased
2 https://huggingface.co/bert-base-multilingual-cased and https://huggingface.co/bert-base-multilingual-uncased
3 https://huggingface.co/xlm-roberta-base and https://huggingface.co/xlm-roberta-large