…Large Language Models (LLMs) have succeeded in advancing the state of the art on many Natural Language Processing (NLP) tasks [Devlin et al., 2019; Brown et al., 2020; Rae et al., 2021; Thoppilan et al., 2022; Chowdhery et al., 2022; Scao et al., 2022; Zhang et al., 2022b; Bai et al., 2022; Touvron et al., 2023], benefiting from ultra-large-scale training corpora and computational resources. To unleash the power of LLMs to adapt to unseen tasks without any parameter updates, in-context learning (ICL) has become a flourishing research topic, which aims to generate predictions by conditioning on a few labeled exemplars (Figure 1(a)) [Dong et al., 2023; Zhao et al., 2021; Shin et al., 2022; Lu et al., 2022].…
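As a brief formalization of this setup (the notation below is a sketch of the standard ICL formulation, not necessarily the paper's own), the frozen LLM is conditioned on $k$ labeled demonstration pairs prepended to the test input, and the prediction is read off from the model's conditional distribution:
\[
\hat{y} \;=\; \operatorname*{arg\,max}_{y \in \mathcal{Y}} \; P_{\mathrm{LLM}}\bigl(y \,\big|\, (x_1, y_1), \ldots, (x_k, y_k), \, x\bigr),
\]
where $(x_i, y_i)$ are the labeled exemplars, $x$ is the test query, $\mathcal{Y}$ is the label space, and no parameters of the LLM are updated.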