In natural language processing, handling the dynamics of language, such as the emergence of new words, can be a challenge for models. In deep learning models, a word that is not present in the training dataset is unknown to the model and is therefore considered out of vocabulary (OOV). Although many models manage to work around this barrier, it is sometimes necessary to learn the embedding of a new word. To this end, we present a method to obtain a dynamic contextual vector representation of a new word based on the BERT language model. To evaluate the method, we took the emergence of the word 'voip' in scientific publications as a case study, obtaining an embedding close to 'telecommunications' and 'signalling', two of the most significant words in the context of the word under study, which demonstrates that the proposed method offers an efficient way to obtain embeddings for new words.
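As a rough illustration of the general idea, and not the authors' exact method, the sketch below derives a vector for an unseen word such as 'voip' by averaging BERT's contextual representations of its subword pieces across sentences that contain it; the model checkpoint, the averaging strategy, and the example sentences are assumptions for illustration only.

```python
# Minimal sketch (assumed setup, not the paper's method): average the
# last-layer BERT vectors of a new word's subword tokens over several
# occurrences to obtain a single contextual embedding for it.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def contextual_embedding(word, sentences):
    """Mean of the hidden states covering `word`'s subword pieces
    in every sentence where the word occurs."""
    word_pieces = tokenizer.tokenize(word)  # e.g. ['vo', '##ip'] for 'voip'
    vectors = []
    for sent in sentences:
        inputs = tokenizer(sent, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
        tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
        # locate the word's subword span(s) and average their vectors
        for i in range(len(tokens) - len(word_pieces) + 1):
            if tokens[i:i + len(word_pieces)] == word_pieces:
                vectors.append(hidden[i:i + len(word_pieces)].mean(dim=0))
    return torch.stack(vectors).mean(dim=0) if vectors else None

# Hypothetical usage with made-up example sentences:
sentences = [
    "voip replaces circuit-switched telephony with packet-based signalling.",
    "The study measures voip call quality over congested networks.",
]
embedding = contextual_embedding("voip", sentences)  # 768-dimensional vector
```

Such a vector could then be compared (e.g. by cosine similarity) against the embeddings of in-vocabulary words like 'telecommunications' or 'signalling' to inspect which existing terms it lands near.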