2020
DOI: 10.1007/978-3-030-52152-3_3

Artificial Creativity Augmentation

Abstract: Creativity has been associated with multifarious descriptions, whereby one exemplary common definition depicts creativity as the generation of ideas that are perceived as both novel and useful within a certain social context. In the face of adversarial conditions taking the form of global societal challenges ranging from climate change and AI risks to technological unemployment, this paper motivates future research on artificial creativity augmentation (ACA) to indirectly support the generation of requisite defense st…

Cited by 6 publications (7 citation statements)
References 52 publications
Year Published: 2020–2023
“…After having introduced a broad variety of near-term guidelines for future AI observatory endeavors based on the exemplified systematic factual and counterfactual retrospective analyses, we provide a differentiated more general outlook on explicitly long-term AI safety directions. For this purpose, we select two recent theoretical AI safety paradigms: on the one hand a direction that has been termed artificial stupidity (AS) (see [196][197][198]) and on the other hand, a direction that we succinctly call eternal creativity (EC) stemming from recent work [13,16,199]. Thereby, note that these two paradigms are by no means postulated to represent the full panoply of nuances and views across the entirety of the young AI safety field.…”
Section: Long-term Directions and Future-oriented Contradistinctions
confidence: 99%
“…Given Type-II-system-defined cognitive-affective goal settings, a systematic function integration can yield complementary synergies. Notably, EC recommends research on substrate-independent functional artificial creativity augmentation [199] (artificially augmenting human creativity and augmenting artificial creativity). For instance, active inference could technically increase Type I AI exploratory abilities [215,216].…”
confidence: 99%
“…The neologistic term of artificial creativity augmentation [15] is deliberately ambiguous and refers to two distinct research directions: artificially augmenting anthropic creativity and augmenting artificial creativity. In short, Chapter 11 suggested that scientifically grounded research on augmenting human creativity, augmenting the yet primitive creativity in Type I AI or implementing Type II AI could represent valid strategies to indirectly tackle global challenges and identify requisite variety (also for AI safety).…”
Section: Artificial Creativity Augmentation Research
confidence: 99%