2022
DOI: 10.1155/2022/7839840
RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the BERT Model Boosted by an Improved ABC Algorithm

Abstract: Answer selection (AS) is a critical subtask of the open-domain question answering (QA) problem. The present paper proposes a method called RLAS-BIABC for AS, which is built on an attention-mechanism-based long short-term memory (LSTM) network and bidirectional encoder representations from transformers (BERT) word embeddings, enriched by an improved artificial bee colony (ABC) algorithm for pretraining and a reinforcement learning-based algorithm for training the backpropagation (BP) algorithm. BERT can be comprised …

Cited by 24 publications (7 citation statements). References 111 publications.
“…Determining the ideal value of λ hinges on the relative frequency of majority and minority samples, emphasizing the significance of meticulous parameter adjustment to attain optimal results. 2) Impact of the loss function: A wide array of strategies is available to tackle data imbalance within the realm of machine learning [42]. These encompass enhancements in data augmentation methodologies as well as the careful selection of an appropriate loss function.…”
Section: Results
confidence: 99%
“…Should a particular food source become depleted or no longer viable, the employed bee associated with that source undergoes a transformation. This bee becomes a scout, embarking on a random search for new and potentially more lucrative food sources [30]. This aspect of the algorithm exemplifies a dynamic optimization process, mirroring the adaptive and efficient foraging strategies of real-world honey bees.…”
Section: A. Artificial Bee Colony Method
confidence: 99%
“…In [28] and [29], effective training of the weights of neural networks was achieved using a differential evolution-based strategy and Artificial Bee Colony (ABC), respectively. The ABC algorithm can be improved by the mutual learning-based ABC [30], which changes the algorithm to use mutual learning between two selected position parameters instead of choosing the candidate food source with the highest fitness [31].…”
Section: Introduction
confidence: 99%
“…Deep learning models have achieved impressive results in various fields, from natural language processing to medical image analysis (2022, Bahadori et al. 2023). These models depend on fine-tuning weights to closely align predicted results with actual data (Gharagozlou et al. 2022). The process of adjusting these weights often involves gradient-based backpropagation methods (Moravvej et al. 2023).…”
Section: Introduction
confidence: 99%
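The gradient-based weight adjustment mentioned above reduces, in the simplest case, to repeated steps of the form below; a linear model with squared error stands in for a full network, and all names are illustrative rather than from the cited works.

```python
import numpy as np

def gradient_step(w, X, y, lr=0.1):
    """One gradient-descent step on mean squared error for the linear
    model X @ w; backpropagation applies the same kind of update
    layer by layer in a deep network."""
    residual = X @ w - y
    grad = 2.0 * X.T @ residual / len(y)  # d(MSE)/dw
    return w - lr * grad
```

Iterating this step shrinks the gap between predictions and targets, which is precisely the "fine-tuning weights to closely align predicted results with actual data" the excerpt describes.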