Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval 2021
DOI: 10.1145/3404835.3462921

Semi-Supervised Variational Reasoning for Medical Dialogue Generation

Abstract: Medical dialogue generation aims to provide automatic and accurate responses that assist physicians in obtaining diagnosis and treatment suggestions in an efficient manner. In medical dialogues, two key characteristics are relevant for response generation: patient states (such as symptoms, medication) and physician actions (such as diagnosis, treatments). In medical scenarios, large-scale human annotations are usually not available due to high costs and privacy requirements. Hence, current approaches to medical …

Cited by 30 publications (25 citation statements). References: 50 publications.
“…MedDialog (Zeng et al, 2020) is a large-scale medical dialogue dataset that contains a Chinese dataset with 3.4 million conversations covering 172 specialties of diseases and an English dataset with 0.26 million conversations covering 96 specialties of diseases. KaMed (Li et al, 2021) is a knowledge-aware medical dialogue dataset that contains over 60,000 medical dialogue sessions and is equipped with external medical knowledge from a Chinese medical knowledge platform. The tasks built on these corpora are usually response generation in dialogue systems, on which researchers can build automated medical chatbots.…”
Section: Related Work
confidence: 99%
“…Text generation (TG) automatically generates text from given inputs while pursuing the goal of appearing indistinguishable from human-written text. Specifically, there are three kinds of inputs and corresponding subtasks in smart healthcare: text inputs (e.g., routine reports), associated with text summarization [65]-[67], question generation [68]-[70], dialogue generation [71]-[73], etc.; data inputs (e.g., neonatal intensive care data), connected with data-to-text [74]; and image inputs (e.g., medical images), related to image captioning [75], [76], visual question answering (VQA) [77]-[79], etc.…”
Section: Modelling
confidence: 99%
“…Medical dialogue systems (MDS) have received much attention due to their high practical value. Previous works [5,15,21] usually model the dialogue history as sequential text and employ sequence-to-sequence (Seq2Seq) models built on large-scale pretrained text encoders and decoders to generate medical responses.…”
Section: Introduction
confidence: 99%
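To make the Seq2Seq setup quoted above concrete, here is a minimal sketch of response generation with a generic pretrained encoder-decoder from the Hugging Face transformers library; the checkpoint name and dialogue turns are placeholders, not the architecture or data used by the cited works.

```python
# Minimal Seq2Seq sketch: generate a reply conditioned on the flattened dialogue history.
# Assumption: "t5-small" is only a stand-in checkpoint; medical MDS work fine-tunes
# domain-specific encoder-decoders on datasets such as MedDialog or KaMed.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

history = [
    "Patient: I have had a fever and a dry cough for three days.",
    "Doctor: Any shortness of breath?",
    "Patient: A little when I climb stairs.",
]
# The dialogue history is modeled as one sequential text, as the quoted passage describes.
source = " ".join(history)

inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```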
“…This issue may be caused by the fact that the cross-attention mechanism is not trained with explicit supervision signals when recalling pivotal information. Recent works [8,15,26,32] proposed extracting medical key phrases and sentences from the dialogue history and incorporating them into response generation via the cross-attention mechanism as well. However, these works bypass the fundamental problem of utilizing medical relations between different utterances, and fail to fully exploit the pivotal information from the dialogue history during response generation.…”
Section: Introduction
confidence: 99%
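The passage above describes feeding extracted key phrases to the generator through the same cross-attention as the history. A common way to realize this is to place the extracted phrases in the encoder input alongside the dialogue history; the sketch below assumes that setup, and extract_key_phrases is a hypothetical placeholder for a keyword or NER component, not an API from the cited works.

```python
# Sketch under stated assumptions: prepend extracted medical key phrases to the encoder
# input so the decoder's cross-attention can attend to them together with the full history.
def extract_key_phrases(turns: list[str]) -> list[str]:
    # Hypothetical stand-in for a medical keyword/NER extractor.
    medical_terms = ["fever", "cough", "chest pain", "shortness of breath"]
    return [term for term in medical_terms if any(term in t.lower() for t in turns)]

def build_encoder_input(turns: list[str]) -> str:
    key_phrases = extract_key_phrases(turns)
    # Key phrases and history share one input sequence, so cross-attention sees both.
    return "[KEY] " + " ; ".join(key_phrases) + " [HISTORY] " + " ".join(turns)

print(build_encoder_input([
    "Patient: I have had a fever and a dry cough for three days.",
    "Doctor: Any shortness of breath?",
]))
```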