4th ACM International Conference on AI in Finance 2023
DOI: 10.1145/3604237.3626891

Making LLMs Worth Every Penny: Resource-Limited Text Classification in Banking

Lefteris Loukas, Ilias Stogiannidis, Odysseas Diamantopoulos, et al.

Abstract: Standard Full-Data classifiers in NLP demand thousands of labeled examples, which is impractical in data-limited domains. Few-shot methods offer an alternative, utilizing contrastive learning techniques that can be effective with as few as 20 examples per class. Similarly, Large Language Models (LLMs) like GPT-4 can perform effectively with just 1-5 examples per class. However, the performance-cost trade-offs of these methods remain underexplored, a critical concern for budget-limited organizations. Our work…
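
The abstract's claim that LLMs can classify with only 1-5 labeled examples per class refers to few-shot in-context learning: labeled demonstrations are placed directly in the prompt rather than used for training. Below is a minimal sketch of that setup using the OpenAI Python SDK; the banking intent labels, the demonstration queries, and the `classify` helper are hypothetical illustrations, not the paper's actual label set or prompt.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical banking intent labels (the paper's real label set is not shown here).
LABELS = ["card_lost", "transfer_failed", "balance_inquiry"]

# One labeled demonstration per class, i.e. the 1-shot setting the abstract describes.
FEW_SHOT = [
    ("I can't find my debit card anywhere.", "card_lost"),
    ("My wire to my landlord never arrived.", "transfer_failed"),
    ("How much money is left in my savings account?", "balance_inquiry"),
]

def classify(query: str) -> str:
    """Classify a banking query by prompting a chat model with in-context examples."""
    demos = "\n".join(f"Query: {q}\nLabel: {l}" for q, l in FEW_SHOT)
    prompt = (
        f"Classify the banking query into one of: {', '.join(LABELS)}.\n\n"
        f"{demos}\n\nQuery: {query}\nLabel:"
    )
    resp = client.chat.completions.create(
        model="gpt-4",   # assumed model name; any available chat model works
        messages=[{"role": "user", "content": prompt}],
        temperature=0,   # deterministic output, appropriate for classification
        max_tokens=10,   # the answer is a short label, so cap generation cost
    )
    return resp.choices[0].message.content.strip()

print(classify("The app says my payment to the electricity company bounced."))
```

Capping `max_tokens` and keeping the demonstration set small are what make this setting cheap per query, which is the cost axis the paper's trade-off analysis is concerned with.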


Cited by 12 publications (1 citation statement)
References 31 publications
“…Recently, generative pre-trained transformer (GPT) models with billion-scale parameters such as GPT-3.5 ( https://platform.openai.com/docs/models/gpt-3-5 ) and GPT-4 (Achiam et al 2023) have achieved state-of-the-art results on a wide range of NLP tasks, including text classification, summarization, question answering, and translation, especially in zero- or few-shot settings, which may potentially address the limitations mentioned above (Gilardi et al 2023, Gilson et al 2023, Hendy et al 2023, Loukas et al 2023, Manakhimova et al 2023, Wang et al 2023a, 2023b). These large language models (LLMs) require huge computational resources to train and are often trained on massive amounts of data, including proprietary datasets.…”
Section: Introduction
confidence: 99%