In the last decade, the quantitative analysis of diachronic language change, and of lexical semantic change in particular, has become the subject of active research. A significant role in this progress has been played by the development of new, effective word embedding techniques. A number of studies have demonstrated this direction, some of them focusing on the choice of word2vec model type, training hyperparameters, and evaluation techniques. In this research, we used the Corpus of Historical American English (COHA). The paper presents the results of multiple training runs and compares word2vec models trained with different hyperparameter settings for lexical semantic change detection. In addition to traditional word similarity and analogical reasoning tests, we evaluated the models on an extended set of synonyms: more than 100,000 English synsets randomly selected from the WordNet database. We show that changing the word2vec model parameters (such as the embedding dimension, the context window size, the model type, and the word discard rate) can significantly affect the resulting word embedding vector space and the detected lexical semantic changes. Additionally, the results depend strongly on properties of the corpus, such as its word frequency distribution.
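As a rough illustration of the kind of hyperparameter sweep and synonym-based evaluation described above, the sketch below trains several word2vec models and scores each one by the average similarity of WordNet synonym pairs. The library choices (gensim for word2vec, NLTK's WordNet interface), the parameter grid, and all helper names are assumptions made for illustration, not the authors' actual setup.

```python
# A minimal sketch, assuming gensim for word2vec training and NLTK's WordNet
# interface (requires nltk.download("wordnet")) for the synonym evaluation.
from itertools import product

from gensim.models import Word2Vec
from nltk.corpus import wordnet as wn


def synonym_pairs(max_pairs=1000):
    """Collect single-word (lemma, lemma) synonym pairs from WordNet synsets."""
    pairs = []
    for synset in wn.all_synsets():
        lemmas = [l.name() for l in synset.lemmas() if "_" not in l.name()]
        pairs.extend((a, b) for a in lemmas for b in lemmas if a < b)
        if len(pairs) >= max_pairs:
            break
    return pairs[:max_pairs]


def mean_synonym_similarity(model, pairs):
    """Average cosine similarity over synonym pairs present in the vocabulary."""
    sims = [model.wv.similarity(a, b)
            for a, b in pairs
            if a in model.wv and b in model.wv]
    return sum(sims) / len(sims) if sims else float("nan")


# `sentences` stands in for a tokenized slice of the corpus,
# e.g. one decade of COHA as a list of token lists.
sentences = [["the", "dog", "barked"], ["the", "cat", "meowed"]]

pairs = synonym_pairs()
for dim, window, sg, sample in product([100, 300], [5, 10], [0, 1], [1e-3, 1e-5]):
    model = Word2Vec(sentences,
                     vector_size=dim,   # embedding dimension
                     window=window,     # context window size
                     sg=sg,             # 0 = CBOW, 1 = skip-gram (model type)
                     sample=sample,     # frequent-word discard rate
                     min_count=1,       # kept low for the toy corpus above
                     epochs=5)
    score = mean_synonym_similarity(model, pairs)
    print(f"dim={dim} window={window} sg={sg} sample={sample}: {score:.3f}")
```

In a study like the one summarized above, each hyperparameter combination would typically be trained several times per corpus slice, since word2vec training is stochastic and run-to-run variation can itself affect the detected semantic changes.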