In this work, data-driven approaches to identifying the gender of the author of a Russian text are investigated, with the purpose of clarifying to what extent machine learning models trained on texts of one genre can give accurate results on texts of another genre. The set of corpora includes: one collected via a crowdsourcing platform, essays of Russian students (RusPersonality), the Gender Imitation corpus, and the corpora used at the Forum for Information Retrieval Evaluation 2017 (FIRE), containing texts from Facebook, Twitter, and Reviews. We present an analysis of numerical experiments based on different features of the input texts (morphological data, vectors of character n-gram frequencies, LIWC, and others) along with various machine learning models (neural networks, gradient boosting methods, CNN, LSTM, SVM, Logistic Regression, Random Forest). The results of these experiments are compared with the results of the FIRE competition to evaluate the effects of multi-genre training. The presented results, obtained with a wide set of data-driven models, establish the accuracy level for the task of identifying the gender of the author of a Russian text in the multi-genre setting. As shown, the average loss in F1 caused by training on a genre other than the one used for testing is about 11.7%.