Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE) 2023
DOI: 10.18653/v1/2023.nlrse-1.7

Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering

Cited by 18 publications (7 citation statements)
References 0 publications

Citation statements, ordered by relevance:
“…They have grouped these works into three broad categories—knowledge‐graph driven inference, pretraining, and fine‐tuning. Knowledge graphs are also being considered as a way to generate relevant prompts at inference time as proposed in Baek et al (2023), and in Wu et al (2023). However, these works use only the textual components but not the graph structures.…”
Section: Knowledge Graphs and Large Language Models—an Emerging Area
confidence: 99%
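The statement above refers to generating prompts from a knowledge graph at inference time by using its textual components. Below is a minimal sketch of that idea, assuming the relevant triples have already been retrieved; the function names and prompt wording are illustrative and are not the exact implementation of Baek et al. (2023) or Wu et al. (2023).

```python
# Minimal sketch of knowledge-augmented prompting at inference time:
# verbalize knowledge-graph triples as text and prepend them to a
# zero-shot question-answering prompt. The helper names and prompt
# wording are assumptions for illustration only.

from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)

def verbalize(triples: List[Triple]) -> str:
    """Turn KG triples into plain-text facts, one per line."""
    return "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)

def build_prompt(question: str, triples: List[Triple]) -> str:
    """Prepend verbalized facts to the question for zero-shot QA."""
    return (
        "Below are facts that may be relevant to the question.\n"
        f"{verbalize(triples)}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

if __name__ == "__main__":
    # Illustrative triples only; a real system would retrieve them
    # from a knowledge graph such as Wikidata.
    triples = [
        ("Marie Curie", "field of work", "radioactivity"),
        ("Marie Curie", "award received", "Nobel Prize in Physics"),
    ]
    print(build_prompt("What field did Marie Curie work in?", triples))
    # The resulting prompt would then be sent to an LLM for completion.
```

Note that only the verbalized text of the triples enters the prompt; the graph structure itself is not encoded, which is the limitation the citing authors point out.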
“…A variety of applications have integrated LLMs and knowledge graphs, including assisting in the development or curation of KGs by extracting entities and relationships from free text [14][15][16][17][18].…”
Section: Related Work
confidence: 99%
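The statement above mentions using LLMs to assist the development or curation of knowledge graphs by extracting entities and relationships from free text. The sketch below illustrates one way to frame that as a prompt; the instruction wording and example passage are assumptions, not taken from the works cited there.

```python
# Minimal sketch of prompting an LLM to extract (head, relation, tail)
# triples from free text as a KG-curation aid. The instruction wording
# is an illustrative assumption, not a prompt from the cited works.

def build_extraction_prompt(passage: str) -> str:
    """Ask an LLM to list entity-relation-entity triples found in a passage."""
    return (
        "Extract all factual relationships from the passage below.\n"
        "Return one triple per line in the form "
        "(head entity, relation, tail entity).\n\n"
        f"Passage: {passage}\n"
        "Triples:"
    )

if __name__ == "__main__":
    prompt = build_extraction_prompt(
        "Marie Curie received the Nobel Prize in Physics in 1903."
    )
    print(prompt)
    # The prompt would be sent to an LLM; its output can then be parsed
    # and reviewed before new triples are added to the knowledge graph.
```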
“…Kan et al (2023) demonstrate how knowledge-aware prompts can help improve the performance of vision-language models. Baek, Aji, and Saffari (2023) use prompting to augment LLMs for domain-specific question answering using a knowledge graph. We find prompting techniques, generalized as prompt templates 6 , appropriate for ChEdBot as they can be intuitively grasped by lecturers.…”
Section: Common Sense and Domain Knowledge in LLMs
confidence: 99%
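The statement above describes generalizing prompting techniques as prompt templates for domain-specific question answering over a knowledge source. Below is a minimal sketch of such a template with slots for a domain, retrieved facts, and the user question; the template text and field names are illustrative assumptions and are not taken from Baek, Aji, and Saffari (2023), Kan et al. (2023), or ChEdBot.

```python
# Minimal sketch of a reusable prompt template for domain-specific QA.
# The template wording, slot names, and example content are illustrative
# assumptions only.

from string import Template
from typing import List

QA_TEMPLATE = Template(
    "You are an assistant for the $domain domain.\n"
    "Use only the facts below when answering.\n\n"
    "Facts:\n$facts\n\n"
    "Question: $question\n"
    "Answer:"
)

def fill_template(domain: str, facts: List[str], question: str) -> str:
    """Instantiate the template with retrieved domain knowledge."""
    return QA_TEMPLATE.substitute(
        domain=domain,
        facts="\n".join(f"- {f}" for f in facts),
        question=question,
    )

if __name__ == "__main__":
    print(fill_template(
        domain="course administration",
        facts=["The final exam takes place in week 14."],
        question="When is the final exam?",
    ))
```

Keeping the template separate from the retrieved facts is what makes this approach easy for non-programmers, such as lecturers, to adapt: only the slot contents change per domain or per question.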