The enormous size of the web and the vagueness of the terms used to formulate queries remain major obstacles to user satisfaction. To address this problem, queries need to be disambiguated based on their context. One well-known technique for enhancing the effectiveness of information retrieval (IR) is query expansion (QE), which reformulates the initial query by adding related terms that help retrieve more relevant results. In this paper, we propose a new semantic QE approach based on a modified Concept2vec model trained on linked data. The novelty of our work is the use of query-dependent linked data from DBpedia as training data for the Concept2vec skip-gram model. We considered only the top feedback documents, and rather than using them directly to generate embeddings, we used their interlinked data instead. In particular, we used linked data attributes with long textual values, e.g., "dbo:abstract", as training data for the neural network models, and we extracted valuable concepts for QE from them. Our experiments on the Associated Press collection showed that retrieval effectiveness can be considerably improved when the skip-gram model is used along with a DBpedia feature. We also demonstrated significant improvements over other approaches.
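As a rough illustration of the pipeline described above (not the authors' implementation), the following Python sketch fetches the dbo:abstract text of DBpedia entities, trains a skip-gram model on those abstracts with gensim, and takes the nearest neighbors of a query keyword as expansion candidates. The entity URIs, query keyword, and hyperparameters are illustrative assumptions; in the approach described here, the entities would come from the interlinks of the top feedback documents rather than being hard-coded.

```python
from SPARQLWrapper import SPARQLWrapper, JSON
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

DBPEDIA_ENDPOINT = "https://dbpedia.org/sparql"

def fetch_abstract(entity_uri: str) -> str:
    """Retrieve the English dbo:abstract of a DBpedia entity."""
    sparql = SPARQLWrapper(DBPEDIA_ENDPOINT)
    sparql.setQuery(f"""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?abstract WHERE {{
            <{entity_uri}> dbo:abstract ?abstract .
            FILTER (lang(?abstract) = 'en')
        }}
    """)
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    return rows[0]["abstract"]["value"] if rows else ""

# Hypothetical DBpedia entities interlinked with the top feedback documents.
entities = [
    "http://dbpedia.org/resource/Information_retrieval",
    "http://dbpedia.org/resource/Query_expansion",
]

# One tokenized "sentence" per abstract; entities without an English
# abstract are skipped.
corpus = [simple_preprocess(t) for t in map(fetch_abstract, entities) if t]

# sg=1 selects the skip-gram architecture; hyperparameters are assumptions.
model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)

# Expand the query with the terms nearest to a query keyword in embedding
# space (assumes the keyword occurs in the training abstracts).
print(model.wv.most_similar(positive=["retrieval"], topn=5))
```

Training on dbo:abstract values rather than the feedback documents themselves keeps the embedding corpus small and query-dependent while still capturing the semantic neighborhood of the query concepts.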