“…Representative works on graph generation from language models include knowledge graph completion models such as COMET (Hwang et al., 2021) that fine-tune GPT (Radford et al., 2019; Brown et al., 2020) and BART (Lewis et al., 2020), generation of event influence graphs (Tandon et al., 2019; Madaan et al., 2020), partially ordered scripts (Sakaguchi et al., 2021), temporal graphs (Madaan and Yang, 2021), entailment trees, proof graphs (Saha et al., 2020; Saha et al., 2021a), and commonsense explanation graphs (Saha et al., 2021b). Linguistic tasks like syntactic parsing (Mohammadshahi and Henderson, 2021; Kondratyuk and Straka, 2019) and semantic parsing (Chen et al., 2020b; Shin et al., 2021) have also made use of language models.…”