2023
DOI: 10.3390/electronics12224654

BookGPT: A General Framework for Book Recommendation Empowered by Large Language Model

Zhiyu Li, Yanfang Chen, Xuan Zhang, et al.

Abstract: With the continuous development of large language model (LLM) technology, represented by generative pretrained transformers (GPTs), new opportunities have emerged in many classic application scenarios across various fields. This paper takes ChatGPT as the modeling object, incorporates LLM technology into the typical book resource understanding and recommendation scenario for the first time, and puts it into practice. By building a ChatGPT-like book recommendation system (BookGPT) framework based on C…

Cited by 6 publications (2 citation statements)
References 30 publications
“…In poetry, LLMs have demonstrated a basic grasp of poetic structure and form, yet they struggle significantly with capturing the essence of metaphorical and allegorical content [9,13]. Another key observation is the issue of cultural contextualization, where models lack the depth to understand historical and literary references integral to Chinese poetry [13,28,29,30,31]. Studies have shown that despite high accuracy in general language tasks, LLMs often produce outputs in poetry tasks that lack emotional depth, cultural resonance and sometimes even explainability [32,33,34,35,36].…”
Section: LLM and Chinese Language Processing (citation type: mentioning; confidence: 99%)
“…Studies have shown that despite high accuracy in general language tasks, LLMs often produce outputs in poetry tasks that lack emotional depth, cultural resonance and sometimes even explainability [32,33,34,35,36]. Lastly, there is a noted improvement in the translation of contemporary texts, but classical poetry still poses a significant challenge, highlighting the gap in models' training on historical literature [37,38,31,39].…”
Section: LLM and Chinese Language Processing (citation type: mentioning; confidence: 99%)