Brain-Computer Interface (BCI) technology is a fast-growing field with applications in the healthcare sector. In this work, signals are acquired from 16 Electroencephalography (EEG) electrodes placed according to the 10-20 electrode system. A BCI for EEG-based imagined word prediction is modeled with Convolutional Neural Networks (CNNs): the AlexNet and GoogLeNet transfer learning models are trained to recognize imagined words cued by visual stimuli, namely up, down, right, and left, in a vocabulary of up to ten words. The performance metrics improve when the Morlet continuous wavelet transform is applied at the pre-processing stage and seven features are extracted: mean, standard deviation, skewness, kurtosis, band power, root mean square, and Shannon entropy. In testing, the AlexNet transfer learning model outperformed the GoogLeNet model, achieving an accuracy of 90.3% and a recall, precision, and F1 score of 91.4%, 90%, and 90.7%, respectively, with the seven extracted features. When the feature set was reduced from seven to four, these metrics dropped to 83.8%, 84.4%, 82.9%, and 83.6%, respectively. This high accuracy paves the way for future work on cross-participant analysis, testing with a larger number of participants, and enhancing the deep learning networks to make the system suitable for EEG-based mobile applications that identify the words speech-disabled persons imagine uttering.
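To make the pre-processing and feature-extraction stage concrete, the sketch below computes a Morlet CWT scalogram (the image-like representation a transfer learning CNN such as AlexNet would consume) and the seven named features for one 16-channel epoch. This is a minimal illustration, not the authors' implementation: the sampling rate, epoch length, band-power range, and the use of PyWavelets/SciPy are all assumptions.

```python
import numpy as np
import pywt
from scipy.stats import skew, kurtosis
from scipy.signal import welch

FS = 250  # assumed sampling rate in Hz (not stated in the abstract)

def morlet_scalogram(epoch, fs=FS):
    """Morlet continuous wavelet transform of one EEG channel.

    The magnitude scalogram can be rendered as an image and fed to an
    AlexNet/GoogLeNet-style transfer learning model.
    """
    scales = np.arange(1, 128)  # coarse-to-fine scales (assumed range)
    coeffs, freqs = pywt.cwt(epoch, scales, "morl", sampling_period=1.0 / fs)
    return np.abs(coeffs), freqs

def bandpower(epoch, fs=FS, band=(8.0, 30.0)):
    """Average power in a frequency band (assumed 8-30 Hz) via Welch's PSD."""
    f, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 256))
    mask = (f >= band[0]) & (f <= band[1])
    return np.sum(psd[mask]) * (f[1] - f[0])

def shannon_entropy(epoch, bins=64):
    """Shannon entropy of the amplitude distribution (histogram estimate)."""
    hist, _ = np.histogram(epoch, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def seven_features(epoch, fs=FS):
    """The seven features named in the abstract, for one channel."""
    return np.array([
        np.mean(epoch),                # mean
        np.std(epoch),                 # standard deviation
        skew(epoch),                   # skewness
        kurtosis(epoch),               # kurtosis
        bandpower(epoch, fs),          # band power
        np.sqrt(np.mean(epoch ** 2)),  # root mean square
        shannon_entropy(epoch),        # Shannon entropy
    ])

# Example: a 2 s, 16-channel epoch -> a (16, 7) feature matrix
epoch = np.random.randn(16, 2 * FS)  # placeholder signal, not real EEG
features = np.vstack([seven_features(ch) for ch in epoch])
print(features.shape)  # (16, 7)
```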
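The transfer learning step could then look like the following PyTorch sketch, in which a pretrained AlexNet has its classifier head replaced for a ten-word vocabulary. The torchvision model and its standard 224x224 input size are real; the dataset wiring and input preparation are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_WORDS = 10  # vocabulary size from the abstract ("up to ten words")

# Load AlexNet pretrained on ImageNet and replace its final layer so it
# classifies scalogram images into imagined-word classes.
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, NUM_WORDS)

# AlexNet expects 3x224x224 inputs, so scalograms would be resized or
# tiled to that shape before fine-tuning (a standard transfer learning setup).
dummy_scalogram = torch.randn(1, 3, 224, 224)  # placeholder input
logits = model(dummy_scalogram)
print(logits.shape)  # torch.Size([1, 10])
```

The same head replacement applies to GoogLeNet (`models.googlenet`) by swapping its `fc` layer, which is how the two models in the comparison would be put on equal footing.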