Finding relevant research papers is a challenging task due to the enormous number of scientific publications produced each year. In recent years, the scientific community has examined citations in greater depth, analysing the content of citing papers in order to identify the most important documents. Citations serve as key indicators of the linkages between research articles and have been used extensively for academic purposes such as computing journal impact factors, determining researchers' h-index, allocating research grants, and identifying emerging research trends. However, researchers have argued that not all citations are equally influential, and alternative techniques have therefore been proposed to identify important citations based on content, metadata, and bibliographic information. Nevertheless, the current state-of-the-art approaches still leave room for improvement, and the use of deep learning models and word embedding techniques in this context has not been studied extensively. In this research work, we propose an approach based on two primary modules: 1) section-wise citation count, and 2) metadata-based analysis of citation intent. We also conduct several experiments that combine deep learning models with FastText, word2vec, and BERT-based word embeddings for citation analysis. These experiments are carried out on two benchmark datasets, and the results are compared with a contemporary study that employed a rich set of content-based features for classification. Our findings indicate that a deep learning CNN model coupled with FastText word embeddings achieves the best results in terms of accuracy, precision, and recall, outperforming the existing state-of-the-art model with a precision score of 0.97.
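To make the best-performing configuration concrete, the sketch below shows one plausible way a 1D-CNN classifier over pretrained FastText vectors could be assembled for binary citation-importance classification. This is not the authors' released code: the vocabulary size, sequence length, layer widths, and the placeholder data are illustrative assumptions only.

```python
# A minimal sketch (assumed, not the paper's implementation): a 1D-CNN
# citation-intent classifier on top of frozen FastText word vectors.
import numpy as np
from tensorflow.keras import layers, models, metrics, initializers


def build_cnn_classifier(embedding_matrix: np.ndarray) -> models.Model:
    """Binary classifier (important vs. incidental citation) with a frozen
    FastText embedding layer followed by convolution and max-pooling."""
    vocab_size, embed_dim = embedding_matrix.shape
    model = models.Sequential([
        layers.Embedding(
            vocab_size, embed_dim,
            embeddings_initializer=initializers.Constant(embedding_matrix),
            trainable=False),                      # keep FastText vectors fixed
        layers.Conv1D(128, kernel_size=5, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),     # P(citation is important)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", metrics.Precision(), metrics.Recall()])
    return model


# Example usage with random placeholder data; real inputs would be tokenised
# citation contexts mapped through a FastText vocabulary.
if __name__ == "__main__":
    fake_embeddings = np.random.rand(5000, 300).astype("float32")
    model = build_cnn_classifier(fake_embeddings)
    x = np.random.randint(0, 5000, size=(32, 200))  # 32 contexts, 200 tokens
    y = np.random.randint(0, 2, size=(32,))
    model.fit(x, y, epochs=1, verbose=0)
```

Freezing the embedding layer reflects the common practice of reusing pretrained FastText vectors unchanged; whether the original experiments fine-tuned the embeddings is not stated in the abstract.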