The paper examines automatic methods for classifying Russian-language sentences into two classes: ironic and non-ironic. The methods under discussion fall into three categories: classifiers based on language-model embeddings, classifiers that use sentiment information, and classifiers with embeddings trained specifically to detect irony. The classifiers are built from neural components such as BERT, RoBERTa, BiLSTM and CNN, together with an attention mechanism and fully connected layers. The irony detection experiments were carried out on two corpora of Russian sentences: the first is composed of journalistic texts from the OpenCorpora open corpus, and the second extends the first with ironic sentences drawn from the Wiktionary resource.
The best results were demonstrated by the group of classifiers based on language-model embeddings, with a maximum F-measure of 0.84 achieved by a combination of RoBERTa, a BiLSTM, an attention mechanism and a pair of fully connected layers in experiments on the extended corpus. In general, the extended corpus produced results 2–5% higher than those obtained on the base corpus. The results achieved are the best reported for this task in Russian and are comparable to the best results for English.
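The best-performing configuration described above can be sketched in PyTorch roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the layer dimensions, the additive attention scoring, and the class names are assumptions, and the RoBERTa encoder is represented only by its output token embeddings, which are passed in as the model input.

```python
import torch
import torch.nn as nn

class IronyClassifier(nn.Module):
    """Hypothetical sketch: language-model token embeddings (e.g. from
    RoBERTa) -> BiLSTM -> attention pooling -> two fully connected
    layers -> binary logits (ironic vs. non-ironic).
    All dimensions are illustrative, not taken from the paper."""

    def __init__(self, emb_dim=768, hidden=256):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # scores each time step
        self.fc1 = nn.Linear(2 * hidden, hidden)
        self.fc2 = nn.Linear(hidden, 2)        # two classes

    def forward(self, token_embeddings):
        # token_embeddings: (batch, seq_len, emb_dim)
        h, _ = self.bilstm(token_embeddings)          # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # (batch, seq, 1)
        context = (weights * h).sum(dim=1)            # attention-pooled vector
        return self.fc2(torch.relu(self.fc1(context)))

model = IronyClassifier()
logits = model(torch.randn(4, 20, 768))  # 4 sentences, 20 tokens each
print(logits.shape)                      # torch.Size([4, 2])
```

In this sketch the attention layer replaces plain mean- or max-pooling over BiLSTM states, letting the classifier weight the tokens most indicative of irony before the fully connected layers produce the final decision.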