Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence 2022
DOI: 10.24963/ijcai.2022/818
Controllable Text Generation for Open-Domain Creativity and Fairness

Abstract: The demand for machine learning systems that can provide both transparency and fairness is constantly growing. Since the concept of fairness depends on the context, studies in the literature have proposed various formalisation and mitigation strategies. In this work, we propose a novel, flexible, discrimination-aware classifier that allows the user to: (i) select and mitigate the desired fairness criterion from a set of available options; (ii) implement more than one fairness criterion; (iii) handle more than …

Cited by 6 publications (14 citation statements). References 6 publications.
“…We compare the performance of our method with other baselines, including the fine-tuning methods BART (Lin et al., 2020) and T5-Large (Lin et al., 2020), the auxiliary guiding-model method NADO (Meng et al., 2022), the prompting method NRP (Carlsson et al., 2022), and 8-shot pure natural language instruction (NLI) on GPT-3.5, as shown in Table 3a.…”
Section: Results
confidence: 99%
“…Constrained Beam Search (Anderson et al., 2017), DeLorean (Qin et al., 2020), COLD, NeuroLogic (Lu et al., 2021); or using an auxiliary guiding model, e.g. PPLM (Anderson et al., 2017), GeDi (Krause et al., 2021), FUDGE (Yang and Klein, 2021), CTRLsum (He et al., 2022), Plug-and-Play Content Planning, NADO (Meng et al., 2022), and MACSum (Zhang et al., 2023).…”
Section: Methods of Controllable Text Generation
confidence: 99%
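The distinction the excerpt draws between search-based decoding and auxiliary guiding models can be illustrated with a minimal sketch of the latter, in the FUDGE style: an attribute model's estimate of whether the desired attribute will hold is added to the base LM's next-token log-probabilities. Everything below (the vocabulary, both models, the weighting) is a hypothetical toy stand-in, not any of the cited systems.

```python
import math

# Toy sketch of auxiliary-model guided decoding (FUDGE-style).
# Both "models" are hand-written stand-ins over a four-token vocabulary.

VOCAB = ["the", "cake", "storm", "<eos>"]

def base_lm_logprobs(prefix):
    """Hypothetical base LM: near-uniform, with a slight preference for 'the'."""
    logits = {tok: 0.0 for tok in VOCAB}
    logits["the"] = 1.0
    z = math.log(sum(math.exp(v) for v in logits.values()))
    return {tok: v - z for tok, v in logits.items()}

def attribute_logprob(prefix, token):
    """Hypothetical attribute model: log P(attribute | prefix + token).
    Here the 'attribute' is simply that the word 'cake' appears."""
    return 0.0 if token == "cake" or "cake" in prefix else math.log(0.1)

def guided_step(prefix, weight=1.0):
    """Greedily pick the next token from the combined log-probabilities."""
    scores = {tok: base_lm_logprobs(prefix)[tok]
                   + weight * attribute_logprob(prefix, tok)
              for tok in VOCAB}
    return max(scores, key=scores.get)

print(guided_step([]))             # attribute model steers the pick to "cake"
print(guided_step([], weight=0.0)) # with no guidance the base LM picks "the"
```

With `weight=0` the base LM's preference wins; increasing the weight trades fluency for attribute satisfaction, which is the central knob in this family of methods.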
“…How to Improve. We encourage future work to explore two different directions: 1) chain/tree/graph-of-thought reasoning, and 2) bridging LLMs with non-autoregressive generation abilities (e.g., NADO (Meng et al., 2022)). For the first, one can try either simple chain/tree/graph-of-thought prompting or even pretraining LLMs with chain-of-thought/scratchpad pairs, as this shows promise for mathematical reasoning.…”
Section: Controlled Paraphrase Generation
confidence: 99%
“…In this work, we propose BOOST, a framework to boost the commonsense of PTLMs' generation in a plug-and-play manner (Figure 2), inspired by the recent development in controllable generation of using a small auxiliary model to control a PTLM by training on its self-generated samples (Meng et al., 2022). Specifically, to better integrate commonsense knowledge, we first build a scorer that evaluates how commonsensical a sentence is.…”
Section: Introduction
confidence: 99%
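The plug-and-play recipe the excerpt describes, a small scorer steering a frozen PTLM, can be sketched at its simplest as rerank-by-score: sample a pool of candidate continuations, score each, keep the best. The scorer and candidates below are illustrative stand-ins, not the actual BOOST components.

```python
# Minimal sketch of plug-and-play control by scoring: a small scorer
# ranks candidate continuations sampled from a frozen PTLM and the
# highest-scoring one is kept. The scorer here is a hand-written toy.

def commonsense_score(sentence):
    """Hypothetical scorer: rewards sentences containing a plausible
    cause-effect word pair from a tiny hand-written list."""
    plausible_pairs = [("rain", "wet"), ("fire", "smoke")]
    return sum(1.0 for cause, effect in plausible_pairs
               if cause in sentence and effect in sentence)

def rerank(candidates):
    """Return the candidate the scorer likes best (ties keep order)."""
    return max(candidates, key=commonsense_score)

candidates = [
    "The rain made the street wet.",
    "The rain made the street glow.",
]
print(rerank(candidates))  # → "The rain made the street wet."
```

Reranking whole sentences is the coarsest point on this spectrum; methods like NADO instead fold the auxiliary signal into per-token decoding, but the division of labor, frozen generator plus small learned scorer, is the same.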