Knowledge graphs can be used to enhance text search and access by augmenting textual content with relevant background knowledge. While many large knowledge graphs are available, using them to make semantic connections between entities mentioned in textual content remains a difficult task. In this work, we therefore introduce contextual path generation (CPG), the task of generating a knowledge path, called the contextual path, that explains the semantic connection between entities mentioned in a textual document with respect to a given knowledge graph. To perform the CPG task well, one has to address its three challenges, namely path relevance, knowledge graph incompleteness, and path well-formedness. This paper designs a two-stage framework comprising: (1) a knowledge-enabled embedding matching and learning-to-rank with multi-head self-attention context extractor to determine a set of context entities relevant to both the query entities and the context document, and (2) a non-monotonic path generation method with a pretrained transformer to generate high-quality contextual paths. Our experimental results on two real-world datasets show that our best-performing CPG model successfully recovers 84.13% of ground truth contextual paths, outperforming the context window baselines. Finally, we demonstrate that the non-monotonic model generates more well-formed paths than its monotonic counterpart.
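To make the first stage more concrete, the sketch below shows one way a context extractor of this kind could be wired up in PyTorch: candidate context entities attend, via multi-head self-attention, to the query-entity and context-document embeddings, and a learned scoring head ranks them. The class name, dimensions, and scoring design are illustrative assumptions for exposition only, not the paper's implementation.

```python
# Minimal sketch (assumed names and shapes, not the paper's implementation):
# rank candidate context entities with multi-head self-attention over the
# query-entity and context-document embeddings.
import torch
import torch.nn as nn

class ContextExtractorSketch(nn.Module):
    def __init__(self, embed_dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.score = nn.Linear(embed_dim, 1)  # learning-to-rank scoring head

    def forward(self, candidate_emb, query_emb, doc_emb):
        # candidate_emb: (B, C, D) KG embeddings of candidate context entities
        # query_emb:     (B, 2, D) embeddings of the two query entities
        # doc_emb:       (B, T, D) token/sentence embeddings of the context document
        context = torch.cat([query_emb, doc_emb], dim=1)
        attended, _ = self.attn(candidate_emb, context, context)  # candidates attend to context
        return self.score(attended).squeeze(-1)                   # (B, C) relevance scores

# Usage: keep the top-k scored candidates as context entities
# that condition the second-stage path generator.
extractor = ContextExtractorSketch()
scores = extractor(torch.randn(1, 10, 256), torch.randn(1, 2, 256), torch.randn(1, 30, 256))
top_k = scores.topk(k=3, dim=-1).indices
```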