Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.75
Actor-Double-Critic: Incorporating Model-Based Critic for Task-Oriented Dialogue Systems

Abstract: In order to improve the sample efficiency of deep reinforcement learning (DRL), we implemented the imagination-augmented agent (I2A) in spoken dialogue systems (SDS). Although I2A achieves a higher success rate than baselines by augmenting the predicted future into a policy network, its complicated architecture introduces unwanted instability. In this work, we propose actor-double-critic (ADC) to improve the stability and overall performance of I2A. ADC simplifies the architecture of I2A to reduce excessive parameters…
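
To make the abstract's high-level idea concrete, below is a minimal sketch of an actor paired with two critics: a model-free critic that scores the current dialogue state and a model-based critic that scores a one-step lookahead produced by a learned transition model. This is not the authors' implementation; the module names, layer sizes, and the way the two critic values are mixed are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the paper's code): actor + two critics, where the
# second critic evaluates a next state predicted by a learned environment model.
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state):
        # Action logits over the dialogue-act space.
        return self.net(state)

class Critic(nn.Module):
    def __init__(self, state_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state):
        return self.net(state)

class EnvModel(nn.Module):
    """Learned transition model: predicts the next dialogue state from (state, action)."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action_onehot):
        return self.net(torch.cat([state, action_onehot], dim=-1))

def double_critic_value(state, action_onehot, critic_mf, critic_mb, env_model, mix=0.5):
    # Model-free value of the current state plus model-based value of the
    # predicted next state; the 50/50 mix is an arbitrary illustrative choice.
    v_mf = critic_mf(state)
    v_mb = critic_mb(env_model(state, action_onehot))
    return mix * v_mf + (1.0 - mix) * v_mb
```

Compared with feeding imagined rollouts directly into the policy network (as I2A does), keeping the model-based signal inside a separate critic keeps the actor small, which is one plausible reading of how ADC "reduces excessive parameters".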

Cited by 2 publications (1 citation statement)
References 29 publications
“…Model-free reinforcement learning methods interact directly with pre-built environments or real users to learn dialogue policies [4]. Model-based reinforcement learning comprises two simultaneous learning modules: model learning and policy learning [5].…”
Section: Related Work, 2.1 Task-Oriented Dialogue Systems (citation type: mentioning)
confidence: 99%
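
The citing passage distinguishes model-free policy learning from model-based learning with two interleaved modules. The sketch below shows that generic Dyna-style structure: fit a transition model on logged turns while updating the policy from a mix of real and model-generated ("imagined") turns. This is a generic illustration, not the cited papers' exact algorithms, and all four callables are hypothetical placeholders supplied by the caller.

```python
# Generic two-module loop: model learning and policy learning proceed together.
import random

def train_model_based(env_step, fit_model, model_rollout, update_policy,
                      episodes=100, imagined_per_real=5):
    """env_step, fit_model, model_rollout, update_policy are placeholder callables."""
    real_buffer = []
    for _ in range(episodes):
        # 1) Interact with the (simulated) user to collect a real transition.
        real_buffer.append(env_step())

        # 2) Model learning: refit the transition model on real data.
        model = fit_model(real_buffer)

        # 3) Policy learning: update from real plus imagined transitions.
        batch = [random.choice(real_buffer)]
        batch += [model_rollout(model) for _ in range(imagined_per_real)]
        update_policy(batch)
```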