3rd International Workshop on Open Challenges in Online Social Networks 2023
DOI: 10.1145/3599696.3612895
Analyzing the Use of Large Language Models for Content Moderation with ChatGPT Examples

Mirko Franco, Ombretta Gaggi, Claudio E. Palazzi
Cited by 8 publications (3 citation statements)
References 17 publications
“…[5][6][7][8], text generation (content creation, code generation for programming languages, etc.) [9,10], question answering, conversational AI [11,12], language summarization [13,14], content recommendation and moderation [15,16], search engines, web pages and documents ranking [17], and data extraction and knowledge graph creation [18,19].…”
Section: State-of-the-art
confidence: 99%
“…The high interconnectedness of these online communities, both within and across different social media platforms, can hence lead to the swift proliferation of extreme viewpoints and associated mis/disinformation. Social networks like Facebook employ a combination of automated systems and human moderators to identify and combat misinformation on their platforms [32], [33]. Machine learning algorithms are used to detect potentially problematic content, which is then reviewed by human moderators to determine whether it violates the platform's community standards [33].…”
Section: Introduction
confidence: 99%
“…Machine learning algorithms are used to detect potentially problematic content, which is then reviewed by human moderators to determine whether it violates the platform's community standards [33]. However, this approach faces several challenges, such as the risk of biased decisions by automated systems and the difficulty of balancing free speech with the need to protect users from harm [32], [33]. Additionally, the increasing sophistication of large language models (LLMs) like ChatGPT has raised questions about their potential use in content moderation, as well as the associated risks and limitations [32].…”
Section: Introduction
confidence: 99%
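The excerpts above describe a two-stage moderation workflow: an automated system flags potentially problematic content, and flagged items are then reviewed by human moderators against community standards. A minimal sketch of that routing logic is below; the keyword matcher stands in for a trained classifier or LLM, and all names (`FLAGGED_TERMS`, `auto_flag`, `moderate`) are illustrative, not any platform's real API.

```python
# Hypothetical sketch of the two-stage pipeline described in the cited work:
# stage 1 flags posts automatically, stage 2 queues flagged posts for humans.

FLAGGED_TERMS = {"scam", "miracle cure"}  # stand-in for an ML/LLM classifier


def auto_flag(post: str) -> bool:
    """Stage 1: automated detection (keyword stand-in for a real model)."""
    text = post.lower()
    return any(term in text for term in FLAGGED_TERMS)


def moderate(posts: list[str]) -> dict[str, list[str]]:
    """Stage 2 routing: flagged posts go to human review, the rest publish."""
    queues: dict[str, list[str]] = {"human_review": [], "published": []}
    for post in posts:
        queues["human_review" if auto_flag(post) else "published"].append(post)
    return queues
```

In a real deployment the human-review queue would feed moderator decisions back as training data; the sketch only captures the routing step that the excerpts attribute to platforms like Facebook.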