Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2023
DOI: 10.18653/v1/2023.acl-long.129
Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations

Cited by 4 publications (2 citation statements); references 0 publications.
“…Due to its simplicity yet effectiveness and versatileness across diverse tasks, several approaches have been introduced to improve the quality of the LLM context. To mention a few, Lyu et al (2023)…”
Section: Large Language Models
confidence: 99%
“…Previous work (Perez et al., 2021; Schick and Schütze, 2022; Bragg et al., 2021) have fully discussed the importance of the true few-shot setting for ICL. Min et al. (2022a), Lu et al. (2022), and Lyu et al. (2023) also try to improve ICL under this setting. Our approach for ICL example reweighting is under the true few-shot setting. [Section 7: Conclusion] In this paper, we find that treating demonstration examples equally can be a bottleneck for ICL, and assigning proper weights to examples can significantly improve performance.…”
confidence: 99%
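The reweighting idea quoted above can be illustrated with a small sketch. The code below is a hypothetical, minimal illustration, not the cited paper's or Z-ICL's actual algorithm: it assumes we already have the model's label log-probabilities obtained with each demonstration placed in the prompt separately, and combines them with per-demonstration weights instead of treating all demonstrations equally. The function name `weighted_icl_predict` and its inputs are invented for this example.

```python
# Hypothetical sketch of weighting in-context demonstrations (illustration only).
import numpy as np

def weighted_icl_predict(label_logprobs_per_demo, weights):
    """Combine per-demonstration label scores with assigned weights.

    label_logprobs_per_demo: shape (k, num_labels); row i holds the model's label
        log-probabilities when only demonstration i is included in the prompt.
    weights: shape (k,); non-negative importance assigned to each demonstration.
    Returns the index of the label with the highest weighted-average score.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                     # normalize weights
    scores = w @ np.asarray(label_logprobs_per_demo)    # weighted average per label
    return int(np.argmax(scores))

# Usage: three demonstrations, two labels; the third demonstration is weighted most.
logprobs = [[-0.9, -0.5], [-0.4, -1.1], [-0.2, -1.6]]
print(weighted_icl_predict(logprobs, weights=[0.2, 0.3, 0.5]))  # -> 0
```

Uniform weights recover the usual equal-treatment baseline; the quoted finding is that choosing non-uniform weights can improve ICL performance.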