Given the ubiquity of digitization and electronic text processing, automated review processing, commonly known as sentiment analysis, has become crucial. Many architectures and word embeddings have been employed for effective sentiment analysis, and deep learning has grown prominent for these problems as vast amounts of data are generated every second. In deep learning, word embeddings act as feature representations and play a central role. This paper proposes a novel deep learning architecture built on hybrid embedding techniques that address the polysemy, semantic, and syntactic limitations of a language model, while also justifying the model's predictions. The model is evaluated on sentiment identification tasks, achieving F1-scores of 0.9254 and 0.88 on the MR and Kindle datasets, respectively. The proposed model outperforms many current techniques on both tasks, suggesting that combining context-free and context-dependent text representations captures complementary aspects of word meaning. Model decisions are justified with the help of visualization techniques such as t-SNE.
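As a rough illustration of the hybrid-embedding idea only (the specific embedding models, dimensions, and class names below are assumptions for the sketch, not the paper's exact architecture), one way to combine a context-free and a context-dependent representation is to concatenate, for each token, a static lookup vector with a contextual vector produced by a pretrained transformer:

```python
# Minimal sketch, assuming a plain embedding table stands in for a pretrained
# context-free model (e.g. Word2Vec/GloVe weights) and BERT supplies the
# context-dependent vectors; not the paper's exact architecture.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class HybridEmbedder(nn.Module):
    def __init__(self, static_vocab_size=30522, static_dim=100,
                 contextual_name="bert-base-uncased"):
        super().__init__()
        # Context-free embeddings: in practice initialised from pretrained
        # static vectors; randomly initialised here purely for illustration.
        self.static = nn.Embedding(static_vocab_size, static_dim)
        self.contextual = AutoModel.from_pretrained(contextual_name)

    def forward(self, input_ids, attention_mask):
        static_vecs = self.static(input_ids)                  # (B, T, static_dim)
        contextual_vecs = self.contextual(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                                   # (B, T, 768)
        # Hybrid representation: concatenate both views of each token,
        # so downstream layers see static and contextual features together.
        return torch.cat([static_vecs, contextual_vecs], dim=-1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["The plot was thin but the acting was superb."],
                  return_tensors="pt", padding=True)
embedder = HybridEmbedder()
hybrid = embedder(batch["input_ids"], batch["attention_mask"])
print(hybrid.shape)  # (batch, tokens, static_dim + 768)
```

The resulting per-token vectors can then feed any sentiment classifier, and their 2-D projections (e.g. via t-SNE) can be inspected to see whether the combined representation separates sentiment classes more cleanly than either component alone.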