Findings of the Association for Computational Linguistics: EMNLP 2022
DOI: 10.18653/v1/2022.findings-emnlp.193
In-Context Learning for Few-Shot Dialogue State Tracking

Cited by 20 publications (7 citation statements)
References 0 publications
“…Pretrained with the internet corpora, LLMs are already familiar with the syntax of formal query languages such as SQL (Hu et al., 2022; Poesia et al., 2022; Arora et al., 2023). When given simple SQL schemas, they can perform zero-shot semantic parsing of simple natural language queries into formal queries.…”
Section: Few-shot Seq2seq Semantic Parsing
confidence: 99%
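As a minimal illustration of the schema-prompted zero-shot parsing these statements describe: the table schema, the prompt wording, and the `complete` stub below are assumptions made for the sketch, not details taken from the cited papers.

```python
# Minimal sketch of schema-prompted zero-shot text-to-SQL parsing.
# The schema, prompt wording, and the `complete` stub are illustrative
# assumptions, not details from the cited work.

def complete(prompt: str) -> str:
    """Placeholder for whatever LLM completion API is used."""
    raise NotImplementedError

SCHEMA = """CREATE TABLE restaurant (
    name TEXT,
    area TEXT,
    price_range TEXT,
    food TEXT
);"""

def to_sql(question: str) -> str:
    # Give the model only the schema and the question (zero-shot:
    # no in-context examples), then let it continue the SELECT.
    prompt = (
        f"{SCHEMA}\n\n"
        f"-- Translate the question into one SQL query over the table above.\n"
        f"-- Question: {question}\n"
        f"SELECT"
    )
    return "SELECT" + complete(prompt)

# e.g. to_sql("Which cheap restaurants serve italian food in the centre?")
```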
“…Niu et al. (2023) use a few-shot prompted Codex model to break down the natural language input to make the task easier for a smaller semantic parser. LLMs have also been applied to semantic parsing on relational databases (Hu et al., 2022; Poesia et al., 2022; Arora et al., 2023). The schemas used in these projects are very small when compared to Wikidata.…”
Section: KBQA Benchmarks
confidence: 99%
“…LLM DST. IC-DST (Hu et al., 2022) is an in-context learning (ICL) framework that enables few-shot DST with LLMs. The prediction is the change in each turn pair instead of the accumulated dialogue states.…”
Section: Dialogue State Tracking
confidence: 99%
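The turn-level formulation mentioned above can be sketched as follows. IC-DST itself casts the prediction as text-to-SQL over the dialogue ontology; the JSON-style slot-value change and the `predict_state_change` stub here are simplifying assumptions, not the paper's actual interface.

```python
# Sketch of the turn-level idea: predict the *change* in dialogue state for
# each turn pair and accumulate it, rather than predicting the full state.
# The JSON-style change representation, the deletion convention, and the
# `predict_state_change` stub are assumptions made for this sketch.
import json

def predict_state_change(state_summary: str, system_turn: str, user_turn: str) -> dict:
    """Placeholder: prompt an LLM (with in-context exemplars) and parse its
    output into {slot: value} updates for this single turn pair."""
    raise NotImplementedError

def track(dialogue: list[tuple[str, str]]) -> dict:
    state: dict[str, str] = {}
    for system_turn, user_turn in dialogue:
        # The accumulated state so far stands in for the full dialogue history.
        change = predict_state_change(json.dumps(state), system_turn, user_turn)
        state.update(change)  # apply slot updates from this turn pair
        for slot, value in list(state.items()):
            if value in ("", None, "[DELETE]"):  # assumed deletion marker
                state.pop(slot)
    return state
```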
“…For DST, Hudeček and Dušek (2023) prompt LLMs with human-authored task descriptions or in-context exemplars. Hu et al. (2022) improve in-context learning performance for DST by incorporating a retriever to fetch useful exemplars. King and Flanigan (2023) further increase the diversity of in-context exemplars and improve the decoding mechanism.…”
Section: Introduction
confidence: 99%
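A rough sketch of the retrieval step described here, assuming an embedding-based nearest-neighbour retriever; the `embed` stub and the example-pool format are hypothetical, and the retriever in the cited work may be a trained encoder rather than an off-the-shelf one.

```python
# Sketch of retrieval-augmented exemplar selection: embed the current
# dialogue context and pick the most similar labelled examples to place
# in the prompt. The `embed` stub and pool format are assumptions.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for a sentence encoder producing a fixed-size vector."""
    raise NotImplementedError

def retrieve_exemplars(context: str, pool: list[dict], k: int = 5) -> list[dict]:
    """Each pool item is assumed to look like {"context": str, "label": str}."""
    q = embed(context)
    q = q / np.linalg.norm(q)
    scored = []
    for ex in pool:
        v = embed(ex["context"])
        scored.append((float(q @ (v / np.linalg.norm(v))), ex))  # cosine similarity
    scored.sort(key=lambda s: s[0], reverse=True)
    return [ex for _, ex in scored[:k]]
```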
“…In executable task-oriented semantic parsing, the system aims to translate users' utterances in natural language to machine-interpretable programs (API calls) that can be executed according to pre-defined API specifications. With the popularity of Large Language Models (LLMs), in-context learning offers a strong baseline for such scenarios, especially in data-limited regimes (Hu et al., 2022; …). However, LLMs are known to hallucinate and therefore pose a formidable challenge in constraining generated content (Parikh et al., 2020).…”
confidence: 99%
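One simple way to constrain generated API calls along the lines this statement raises is to validate them against the pre-defined specification and reject anything the spec does not license; the `SPEC` contents and the call syntax below are illustrative assumptions only, not taken from the cited papers.

```python
# Sketch of spec-based validation of generated API calls: reject calls whose
# function name or argument names are not in the pre-defined specification.
# SPEC contents and the call syntax are assumptions for this sketch.
import re

SPEC = {
    "find_restaurant": {"area", "food", "price_range"},
    "book_restaurant": {"name", "time", "people"},
}

CALL_RE = re.compile(r"^(\w+)\((.*)\)$")

def validate_call(call: str) -> bool:
    m = CALL_RE.match(call.strip())
    if not m:
        return False
    name, args = m.group(1), m.group(2)
    if name not in SPEC:                       # hallucinated function name
        return False
    # Naive split: assumes argument values contain no commas.
    for arg in filter(None, (a.strip() for a in args.split(","))):
        key = arg.split("=", 1)[0].strip()
        if key not in SPEC[name]:              # hallucinated argument name
            return False
    return True

# validate_call('find_restaurant(area="centre", food="italian")')  -> True
# validate_call('cancel_flight(id=3)')                             -> False
```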