2022
DOI: 10.1007/978-981-16-7952-0_22
Resume Classification Using Bidirectional LSTM and Attention Mechanism

Cited by 4 publications (3 citation statements)
References 4 publications
“…Substituting the random forest classifier with logistic regression resulted in a 1-2% drop in accuracy. We also compared our results with those of GloVe and Word2Vec word embeddings using a BiLSTM with and without an attention mechanism [6,12,7,25,37]; the attention mechanism has been known to boost the accuracy of sequence-learning models in the past.…”
Section: Discussion On Results
confidence: 99%
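The statement above refers to attention pooling over BiLSTM hidden states as the mechanism behind the accuracy gains. As a minimal illustrative sketch (not the cited paper's implementation), additive attention scores each timestep's hidden state, normalizes the scores with a softmax, and returns the weighted sum as the sequence representation fed to the classifier. All names, shapes, and the scoring vector below are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, w):
    """Pool a sequence of hidden states H (seq_len x d) into one vector.

    w is a (hypothetical) learned scoring vector; here it is random.
    Returns the attention-weighted context vector and the weights."""
    scores = H @ w            # one scalar score per timestep, shape (seq_len,)
    alpha = softmax(scores)   # attention weights, non-negative, sum to 1
    return alpha @ H, alpha   # context vector of shape (d,)

rng = np.random.default_rng(0)
H = rng.normal(size=(10, 8))  # e.g. 10 timesteps of BiLSTM output, dim 8
w = rng.normal(size=8)
context, alpha = attention_pool(H, w)
print(context.shape, round(float(alpha.sum()), 6))  # (8,) 1.0
```

Without attention, the same classifier would typically see only the final hidden state (or a mean over timesteps); the learned weights let informative timesteps dominate the pooled vector, which is the effect the excerpt credits for the accuracy boost.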
“…Several researchers have explored word embeddings for resume classification. Some notable and distinctive works in this direction are: (a) GloVe word embeddings with convolutional neural networks [24]; (b) Word2Vec word embeddings with a Bidirectional LSTM with attention [25]; (c) GloVe word embeddings with deep neural networks [26]; (d) GloVe word embeddings with graph neural networks [27]. In [28], Zu and Wang claimed that the combination of a Bidirectional LSTM with a convolutional neural network and conditional random fields is the best classifier for word-embedding sequences extracted from text blocks in a resume.…”
Section: Related Work
confidence: 99%
“…It uses convolutional neural networks (CNN) and recurrent neural networks (RNN) to extract features from CVs and classify them into specific categories. [7] and [8] have explored different approaches and techniques to get the most out of this model. Researchers have proposed model architectures specifically tailored to BERT, using additional layers for classification, or in combination with other machine learning methods such as convolutional neural networks (CNN) or recurrent neural networks (RNN).…”
Section: Related Work
confidence: 99%