2022
DOI: 10.48550/arxiv.2208.07097
Preprint

Efficient Task-Oriented Dialogue Systems with Response Selection as an Auxiliary Task

Cited by 4 publications (4 citation statements) | References 0 publications
“…There are several choices for the base model framework, like decoder-only GPT (Yang, Li, and Quan 2021; Peng et al. 2021), encoder-decoder T5 (Su et al. 2021; Bang, Lee, and Koo 2023), UniLM-based models (He et al. 2022b,a), encoder-2-decoders-based models (Lee 2021; Cholakov and Kolev 2022). Considering that the belief generation depends more on understanding and summarization ability, while the policy and response generation relies more on generative ability to maintain contextual coherence.…”
Section: Model Framework
confidence: 99%
“…We evaluate both end-to-end and policy optimization settings. This includes UBAR (Nekvinda and Dusek, 2021), PPTOD (Su et al., 2022), RSTOD (Cholakov and Kolev, 2022), BORT (Sun et al., 2022a), MTTOD (Lee, 2021), HDNO (Wang et al., 2020a), GALAXY, MarCO (Wang et al., 2020b), Mars (Sun et al., 2022b), and KRLS. To obtain database search results in the end-to-end setting, we use MTTOD's dialogue state tracker, which is trained jointly during fine-tuning.…”
Section: Experiments Setup
confidence: 99%