2022
DOI: 10.1504/ijcat.2022.123237

SiNoptiC: swarm intelligence optimisation of convolutional neural network architectures for text classification

Cited by 8 publications (4 citation statements)
References 0 publications
“…In our experiments, the model used for feature extraction is similar to the CNN model described in [3]. It consists of an embedding layer followed by a convolutional layer based on Rectified Linear Units (ReLU).…”
Section: Baseline Model (mentioning)
confidence: 99%
“…Next, it has a max-pooling layer followed by a fully connected layer with a Softmax. Compared with the model described in [3], this model has fewer filters, for time-optimization reasons. The model has demonstrated impressive performance and is used in this study with three different classifiers: NB, Softmax, and LR.…”
Section: Baseline Model (mentioning)
confidence: 99%
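
The two statements above describe the baseline text-classification CNN used by the citing work: an embedding layer, a convolutional layer with ReLU activations, a max-pooling layer, and a fully connected layer with a Softmax. The following is a minimal sketch of such an architecture in Keras; the vocabulary size, sequence length, embedding dimension, filter count, and kernel width are illustrative assumptions, not values taken from the cited papers.

import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 20_000   # assumed vocabulary size
SEQ_LEN = 100         # assumed (padded) input sequence length
EMBED_DIM = 128       # assumed embedding dimension
NUM_FILTERS = 64      # assumed number of convolutional filters
KERNEL_SIZE = 5       # assumed convolution window width
NUM_CLASSES = 2       # assumed number of target classes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),                     # embedding layer
    layers.Conv1D(NUM_FILTERS, KERNEL_SIZE, activation="relu"),  # convolutional layer with ReLU
    layers.GlobalMaxPooling1D(),                                 # max-pooling layer
    layers.Dense(NUM_CLASSES, activation="softmax"),             # fully connected layer with Softmax
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

The citing study also feeds the extracted features to NB and LR classifiers; only the Softmax head is sketched here.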
“…To address this issue, an innovative approach is proposed to bridge the gap between user queries and document vocabulary. One promising direction involves leveraging optimal deep learning architectures (Ferjani et al., 2022; Khoei et al., 2023), which offer an avenue for mitigating the impact of vocabulary differences in IR while enabling more accurate matching between user queries and document content. By learning representations that transcend individual terms, the proposed models effectively handle vocabulary mismatches and facilitate personalized information access.…”
Section: Introduction (mentioning)
confidence: 99%
“…• Recall is the ratio between the number of TP and the total number of positives, i.e., the fraction of relevant objects that appear in the recommendation list, or equivalently the probability that a relevant object is recommended [37,38]. It is given by Equation (4).…”
(mentioning)
confidence: 99%
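
The referenced Equation (4) is not reproduced in the excerpt. Assuming the standard definition implied by the sentence above, with TP the number of true positives and FN the number of false negatives (so that TP + FN is the total number of positives), it would read:

\[
\text{Recall} = \frac{TP}{TP + FN}
\]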