The Graph Convolutional Network (GCN) is extensively used in text classification tasks and performs well when processing non-Euclidean structured data. GCN is usually implemented with spatial-based methods, such as the Graph Attention Network (GAT). However, current GCN-based methods still lack a reasonable mechanism to handle the problems of contextual dependency and lexical polysemy. Therefore, an improved GCN (IGCN) is proposed to address these problems by introducing a Bidirectional Long Short-Term Memory (BiLSTM) network, Part-of-Speech (POS) information, and the dependency relationship. From a theoretical point of view, the innovation of IGCN is generalizable and straightforward: the short-range contextual dependency and the long-range contextual dependency captured by the dependency relationship are used together to address the problem of contextual dependency, and the more comprehensive semantic information provided by the BiLSTM and the POS information is used to address the problem of lexical polysemy. Notably, the dependency relationship is transplanted from relation extraction tasks to text classification tasks to provide the graph required by IGCN. Experiments on three benchmark datasets show that IGCN achieves competitive results compared with seven baseline models.
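To make the described pipeline concrete, the following is a minimal sketch (not the authors' released code) of the architecture outlined above: word and POS embeddings are fed to a BiLSTM for short-range context, GCN layers propagate information over a dependency-parse adjacency matrix for long-range context, and a pooled representation is classified. All names, dimensions, layer counts, and the mean-pooling choice are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class IGCNSketch(nn.Module):
    """Hypothetical sketch of a BiLSTM + POS + dependency-graph GCN classifier."""

    def __init__(self, vocab_size, pos_size, emb_dim=300, pos_dim=30,
                 hidden_dim=128, gcn_dim=128, num_classes=2, gcn_layers=2):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.pos_emb = nn.Embedding(pos_size, pos_dim)      # POS information
        self.bilstm = nn.LSTM(emb_dim + pos_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # One linear transform per GCN layer; the first layer consumes the
        # BiLSTM output (2 * hidden_dim because it is bidirectional).
        dims = [2 * hidden_dim] + [gcn_dim] * gcn_layers
        self.gcn = nn.ModuleList([nn.Linear(dims[i], dims[i + 1])
                                  for i in range(gcn_layers)])
        self.classifier = nn.Linear(gcn_dim, num_classes)

    def forward(self, words, pos_tags, adj):
        # words, pos_tags: (batch, seq_len); adj: (batch, seq_len, seq_len),
        # built from the dependency parse (1 where a dependency arc exists).
        x = torch.cat([self.word_emb(words), self.pos_emb(pos_tags)], dim=-1)
        h, _ = self.bilstm(x)                                # short-range context
        # Row-normalised graph convolution: h <- ReLU(A_hat @ h @ W)
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        a_hat = adj / deg
        for layer in self.gcn:
            h = torch.relu(a_hat @ layer(h))                 # long-range context
        doc = h.mean(dim=1)                                  # mean-pool over tokens
        return self.classifier(doc)


if __name__ == "__main__":
    model = IGCNSketch(vocab_size=1000, pos_size=50)
    words = torch.randint(0, 1000, (2, 10))
    pos_tags = torch.randint(0, 50, (2, 10))
    adj = torch.eye(10).repeat(2, 1, 1)   # self-loops only, as a stand-in dependency graph
    print(model(words, pos_tags, adj).shape)                 # torch.Size([2, 2])
```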