2022
DOI: 10.1016/j.ins.2021.12.076

ReCom: A deep reinforcement learning approach for semi-supervised tabular data labeling

Cited by 8 publications (1 citation statement)
References 22 publications
“…However, one limitation of many of these methods is that they often require manual adjustment of parameters, which can make them less effective and limit their ability to generalize well. Additionally, while RLMDP offers resource efficiency advantages, it struggles to efficiently utilize all empirical data [27][28][29]. To overcome this limitation, the Prioritized Experience Replay (PER) sampling technique has been introduced and integrated with DQN and DDQN methods, resulting in DQN-PER and DDQN-PER approaches [8,9,30].…”
Section: Introductionmentioning
confidence: 99%
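The citation statement above refers to Prioritized Experience Replay (PER) as the fix for uniform replay's poor use of empirical data in DQN/DDQN. As a point of reference only (not code from the cited paper), the sketch below shows a minimal proportional-priority replay buffer in the spirit of PER; the class name, parameters, and interface are assumptions chosen for illustration.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional PER sketch: sampling probability ~ priority**alpha."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha                      # how strongly priorities skew sampling
        self.data = []                          # stored transitions
        self.priorities = np.zeros(capacity)    # one priority per slot
        self.pos = 0                            # next write index (ring buffer)

    def add(self, transition):
        # New transitions get the current maximum priority so they are
        # replayed at least once before their TD error is known.
        max_prio = self.priorities.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[:len(self.data)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        idxs = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights correct the bias from non-uniform
        # sampling; annealing beta toward 1 removes the bias entirely.
        weights = (len(self.data) * probs[idxs]) ** (-beta)
        weights /= weights.max()
        batch = [self.data[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors, eps=1e-6):
        # Priority is proportional to the magnitude of the TD error.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = abs(err) + eps
```

In a DQN-PER or DDQN-PER style loop, each learning step would sample a weighted batch, compute TD errors with the online (and, for DDQN, target) network, scale the loss by the importance weights, and feed the new TD errors back through update_priorities.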