2022
DOI: 10.1155/2022/8467349
An Automated Toxicity Classification on Social Media Using LSTM and Word Embedding

Abstract: The automated identification of toxicity in texts is a crucial area in text analysis, since the social media world is replete with unfiltered content that ranges from mildly abusive to downright hateful. Researchers have found unintended bias and unfairness caused by training datasets, which led to inaccurate classification of toxic words in context. In this paper, several approaches for locating toxicity in texts are assessed and presented, aiming to enhance the overall quality of text classification. Gen…
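To make the general approach concrete, the following is a minimal NumPy sketch of the pipeline family the paper works in: token ids are mapped through a word-embedding table, run through a single LSTM cell over the sequence, and the final hidden state is passed to a sigmoid head that outputs a toxicity probability. All names, dimensions, and weights here are illustrative assumptions (randomly initialized, untrained), not the authors' actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, EMB, HID = 50, 8, 16  # toy vocabulary / embedding / hidden sizes

# Embedding table: maps token ids to dense vectors.
E = rng.standard_normal((VOCAB, EMB)) * 0.1

# LSTM parameters, stacked for the four gates (input, forget, cell, output).
W = rng.standard_normal((4 * HID, EMB)) * 0.1   # input-to-hidden weights
U = rng.standard_normal((4 * HID, HID)) * 0.1   # recurrent weights
b = np.zeros(4 * HID)

# Classifier head: final hidden state -> toxicity probability.
w_out = rng.standard_normal(HID) * 0.1
b_out = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_classify(token_ids):
    """Embed tokens, run them through an LSTM, and return P(toxic)."""
    h = np.zeros(HID)
    c = np.zeros(HID)
    for t in token_ids:
        x = E[t]                           # embedding lookup
        z = W @ x + U @ h + b
        i = sigmoid(z[0*HID:1*HID])        # input gate
        f = sigmoid(z[1*HID:2*HID])        # forget gate
        g = np.tanh(z[2*HID:3*HID])        # candidate cell state
        o = sigmoid(z[3*HID:4*HID])        # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return sigmoid(w_out @ h + b_out)

# Example: classify a toy sequence of token ids; with random weights the
# probability is arbitrary but always lies in (0, 1).
p = lstm_classify([3, 17, 42, 5])
print(float(p))
```

In a trained system the embedding table would come from pretrained vectors (or contextual BERT embeddings, as the citing work notes) and all weights would be fit on labeled toxic/non-toxic examples; this sketch only shows the forward-pass structure.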

Cited by 14 publications (1 citation statement)
References 25 publications
“…The use of BERT embeddings, combined with long short-term memory (LSTM) for identifying toxic content in social media, demonstrates the potential of these models in mental health analysis. Further, the integration of multi-level embeddings has shown promise in enhancing model performance, particularly in sentiment and emotion recognition tasks (Alsharef et al., 2022).…”

Section: Advancements in Embeddings and Applications in Mental Health
Confidence: 99%