2022
DOI: 10.48550/arxiv.2205.12688
Preprint
ProsocialDialog: A Prosocial Backbone for Conversational Agents

Abstract: Most existing dialogue systems fail to respond properly to potentially unsafe user utterances by either ignoring or passively agreeing with them. To address this issue, we introduce PROSOCIALDIALOG, the first large-scale multi-turn dialogue dataset to teach conversational agents to respond to problematic content following social norms. Covering diverse unethical, problematic, biased, and toxic situations, PROSOCIALDIALOG contains responses that encourage prosocial behavior, grounded in commonsense social rules…
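For readers who want to inspect the data the abstract describes, below is a minimal sketch of loading it with the Hugging Face datasets library. It assumes the dataset is published on the Hub under the ID allenai/prosocial-dialog and exposes fields named context, response, safety_label, and rots (rules-of-thumb); these identifiers are assumptions, not details confirmed by this report.

```python
# Minimal sketch: loading and inspecting ProsocialDialog.
# Assumes the dataset is hosted on the Hugging Face Hub as
# "allenai/prosocial-dialog" with fields "context", "response",
# "safety_label", and "rots" -- these names are assumptions.
from datasets import load_dataset

dataset = load_dataset("allenai/prosocial-dialog", split="train")

example = dataset[0]
print("Context:       ", example["context"])       # potentially unsafe user utterance
print("Response:      ", example["response"])      # prosocial, norm-grounded reply
print("Safety label:  ", example["safety_label"])  # per-utterance safety annotation
print("Rules of thumb:", example["rots"])          # commonsense social rules
```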

Cited by 7 publications (10 citation statements)
References: 31 publications
“…Social Norms Domain. PROSOCIALDIALOG (Kim et al., 2022) is a multi-turn English conversation dataset that instructs models to respond to problematic inputs according to human social norms. The dataset covers various unethical, problematic, biased, and harmful scenarios, created using a human-machine collaboration framework.…”
Section: Other Domains
Mentioning confidence: 99%
“…It represents the use case where evaluation is dynamic: (1) the model performance is easily swayed by human responses and can hardly be measured on benchmark datasets (Li et al., 2021), (2) the model has to balance multiple criteria like interestingness, informativeness, etc., which could be subjective for different user groups (Thoppilan et al., 2022), (3) it is essential to implement fallback options (e.g., responses like "sorry I didn't understand" that are built around the model at the UI level) when the model does not behave as expected, or safety modules when there is potential for controversiality (Kim et al., 2022). These properties also make dialog systems an ideal testbed for discussing UI designs (§2.3) and personalization (§2.4).…”
Section: Walkthrough Case Studies
Mentioning confidence: 99%
“…Other approaches include strategies for responding to problematic contexts, such as steering away from toxicity (Baheti et al., 2021; Arora et al., 2022), using apologies (Ung et al., 2022), and non-sequiturs (Xu et al., 2021b). Our work is closely related to a study that proposed ProsocialDialog, a dataset where speakers disagree with unethical and toxic contexts using safety labels and social norms (Kim et al., 2022a). Using guidelines allows for more fine-grained control by specifying the contexts they are relevant to, and can provide more informative responses.…”
Section: Related Work
Mentioning confidence: 99%
“…We use Blenderbot (Roller et al., 2021) to generate 3 additional responses for each context, creating a set of four responses including the original response from the dataset, denoted as R_b, which is used in tasks A) and C) below. DIALGUIDE-SAFETY consists of data for the safety domain, where we augment conversations from the ProsocialDialog (Kim et al., 2022a) dataset. A) Guideline writing task.…”
Section: Response Entailment and Selection
Mentioning confidence: 99%