Significant progress has been made in sentiment analysis over the past few years, especially due to the application of deep neural language models. However, trained models often transfer poorly from one domain to another, especially for less-studied languages such as Russian. We propose an approach to building cross-domain sentiment analysis models based on a two-stage procedure: first, we fine-tune a pre-trained RuBERT language model on a combined out-of-domain corpus, and then fine-tune this model on a small corpus from the target domain. We conducted large-scale experiments with 30 sentiment-annotated corpora across 12 domains. To increase the representativeness of news texts with high-quality annotation, we created a novel RuNews corpus containing 1,823 news articles annotated with sentiment labels. The results show that fine-tuning the model on a small number (about several hundred) of annotated domain texts can significantly improve the performance of sentiment analysis for a new domain (on average by 4.6 percentage points). We also obtained state-of-the-art results on 7 out of 14 test corpora.

INDEX TERMS BERT, cross-domain models, neural language models, sentiment analysis
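
As a rough illustration of the two-stage procedure summarized above, the sketch below shows how such fine-tuning could be set up with the Hugging Face Transformers library. The checkpoint name, dataset placeholders, and hyperparameters are assumptions chosen for illustration, not the authors' exact configuration.

```python
# Minimal sketch of two-stage fine-tuning: stage 1 on a combined
# out-of-domain sentiment corpus, stage 2 on a small target-domain corpus.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL_NAME = "DeepPavlov/rubert-base-cased"  # assumed RuBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

def fine_tune(model, train_dataset, output_dir, epochs):
    """Run one fine-tuning stage on a tokenized, sentiment-labeled dataset."""
    args = TrainingArguments(output_dir=output_dir,
                             num_train_epochs=epochs,
                             per_device_train_batch_size=16,
                             learning_rate=2e-5)
    Trainer(model=model, args=args, train_dataset=train_dataset).train()
    return model

# `combined_dataset` and `target_domain_dataset` are hypothetical placeholders
# for tokenized datasets prepared beforehand with the tokenizer above.
model = fine_tune(model, combined_dataset, "stage1-general", epochs=3)
model = fine_tune(model, target_domain_dataset, "stage2-domain", epochs=3)
```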