2022
DOI: 10.3390/math10030488
A New Hybrid Based on Long Short-Term Memory Network with Spotted Hyena Optimization Algorithm for Multi-Label Text Classification

Abstract: Multi-Label Text Classification (MLTC), which assigns multiple labels to each document, is an essential task in natural language processing. Traditional text classification methods, such as classical machine learning, usually suffer from data scattering and fail to discover relationships within the data. With the development of deep learning algorithms, many authors have applied deep learning to MLTC. In this paper, a novel model called Spotted Hyena Optimizer (SHO)-Long Short-Term Memory (SHO-LSTM) fo…
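The abstract pairs an LSTM with the Spotted Hyena Optimizer. As a hedged sketch of the SHO search loop itself (a simplified encircling/attacking update minimizing a toy sphere function, standing in for tuning LSTM weights; the function name, parameters, and simplifications are this sketch's assumptions, not the paper's implementation):

```python
import random

def sho_minimize(objective, dim, bounds, pop=20, iters=100, seed=0):
    # Simplified Spotted Hyena Optimizer sketch: each "hyena" (candidate
    # solution) encircles the current best ("prey") with shrinking step sizes.
    rng = random.Random(seed)
    lo, hi = bounds
    hyenas = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(hyenas, key=objective)[:]
    for t in range(iters):
        h = 5 - t * (5 / iters)              # control parameter decays 5 -> 0
        for i, x in enumerate(hyenas):
            new = []
            for d in range(dim):
                B = 2 * rng.random()          # swarm factor
                E = 2 * h * rng.random() - h  # converging factor in [-h, h]
                D = abs(B * best[d] - x[d])   # distance to the best solution
                v = best[d] - E * D           # encircle / attack move
                new.append(min(hi, max(lo, v)))  # clamp to search bounds
            if objective(new) < objective(x):    # greedy acceptance
                hyenas[i] = new
        cand = min(hyenas, key=objective)
        if objective(cand) < objective(best):
            best = cand[:]
    return best, objective(best)

# Toy use: minimize a 5-D sphere function (a stand-in for a real
# LSTM-weight objective, which would be far more expensive to evaluate).
best, fval = sho_minimize(lambda v: sum(x * x for x in v),
                          dim=5, bounds=(-10, 10))
```

In the paper's setting the objective would be a validation loss of the LSTM under a given weight vector; the loop structure stays the same, only the (much costlier) objective changes.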

Cited by 47 publications (20 citation statements)
References 70 publications
“…Furthermore, SHO-CNN also achieves higher performance when compared to the baseline and other state-of-the-art approaches. LSTM, an upgraded version of CNN, is optimized using SHO [45], resulting in a higher hamming loss than SHO-CNN but superior micro-f1 and macro-f1 performance. Table 6 shows a comparison of the proposed SHO-CNN model with various state-of-the-art approaches on the RCV1-v2, Reuters21578, Slashdot, and NELA-GT-2019 datasets.…”
Section: Results
confidence: 99%
“…Table 6 shows a comparison of the proposed SHO-CNN model with various state-of-the-art approaches on the RCV1-v2, Reuters21578, Slashdot, and NELA-GT-2019 datasets. (Table reconstructed below; column labels inferred from the citing text: Hamming loss, Micro-F1, Macro-F1.)

Method                 Hamming Loss   Micro-F1   Macro-F1
[92]                   0.872          0.855      0.682
LP [93]                0.874          0.858      0.691
CNN [25]               0.891          0.849      0.727
BERT [94]              0.733          0.877      0.667
CNN-RNN [95]           0.872          0.849      0.752
SGM [96]               0.814          0.869      0.712
SGM-GE [97]            0.753          0.878      0.762
Seq2Set [98]           0.736          0.879      0.751
ML-R [50]              0.793          0.898      0.695
Seq2Tree [99]          0.732          0.868      0.700
TextRCNN [100]         0.765          0.815      0.592
HiAGM [100]            0.768          0.839      0.633
HTCInfoMax [101]       0.748          0.835      0.627
HiMatch [102]          0.754          0.847      0.641
SHO-LSTM [45]          0.737          0.913      0.781
SHO-CNN (Proposed)     0.722          0.906      0.776
…”
Section: Results
confidence: 99%
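The comparison above reports the three standard multi-label metrics: Hamming loss, micro-F1, and macro-F1. As an illustration of how they are computed (toy binary indicator matrices and plain-Python implementations of the standard definitions; this is not the cited papers' evaluation code):

```python
def hamming_loss(y_true, y_pred):
    # Fraction of individual label slots that are predicted wrong.
    n = len(y_true) * len(y_true[0])
    return sum(t != p
               for row_t, row_p in zip(y_true, y_pred)
               for t, p in zip(row_t, row_p)) / n

def f1_from_counts(tp, fp, fn):
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def micro_macro_f1(y_true, y_pred):
    n_labels = len(y_true[0])
    per_label, TP, FP, FN = [], 0, 0, 0
    for j in range(n_labels):
        tp = sum(t[j] and p[j] for t, p in zip(y_true, y_pred))
        fp = sum((not t[j]) and p[j] for t, p in zip(y_true, y_pred))
        fn = sum(t[j] and (not p[j]) for t, p in zip(y_true, y_pred))
        TP, FP, FN = TP + tp, FP + fp, FN + fn
        per_label.append(f1_from_counts(tp, fp, fn))
    micro = f1_from_counts(TP, FP, FN)   # pool counts over all labels
    macro = sum(per_label) / n_labels    # unweighted mean of per-label F1
    return micro, macro

# Toy data: 3 documents x 3 labels (1 = label assigned).
y_true = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]
y_pred = [[1, 0, 0], [0, 1, 0], [1, 0, 0]]
```

Note the asymmetry the quote relies on: micro-F1 pools true/false positives across labels, so frequent labels dominate, while macro-F1 averages per-label F1 and so weights rare labels equally; Hamming loss is lower-is-better, the two F1 scores higher-is-better.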
“…Deep learning has become the AI/ML technique of choice in the last few years, due to its groundbreaking success in numerous real-world applications (Aghazadeh & Gharehchopogh, 2018;Asghari et al, 2021;Collobert & Weston, 2008;Khataei Maragheh et al, 2022;Krizhevsky et al, 2017;Lin et al, 2019;Niu et al, 2019). Four machine learning methods were adopted by Paulson et al (2020) to model the relationships between thermal histories and surface porosity formation in Ti6Al4V during single line scans using L-PBF.…”
Section: Artificial Intelligence Methods
confidence: 99%