2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01140
ArtEmis: Affective Language for Visual Art

Cited by 89 publications (81 citation statements)
References 47 publications
“…of fine-art [32]. A related field models image aesthetics using votes on social media [23], and recently the link between style and emotion is explored [1,24]. Generative adversarial networks (GAN) such as cycle-consistent GAN [40] have been trained to map images from one domain to another, including between styles, and require labelled sets of (unpaired) images.…”
Section: Related Work
confidence: 99%
“…The consensus was determined using a graph-based vote pooling method in which edges coded by an affinity matrix A_{i,j} reflecting the number of times both images {i, j} were simultaneously selected within an annotated cluster/moodboard. Thresholding A_{i,j} at a given consensus level C_N ∈ [1, 5] partitions the group into subgroups. Fig.…”
Section: Cleaning Style Groups (BAM-FG-C_N)
confidence: 99%
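The vote-pooling step quoted above can be sketched as follows. This is a minimal illustration, not the cited paper's implementation: it assumes annotations arrive as lists of image indices (one list per annotated cluster/moodboard), builds the pairwise co-selection counts A_{i,j}, thresholds them at a consensus level, and returns the connected components of the resulting graph as subgroups. The function name `consensus_subgroups` and the input format are assumptions for illustration.

```python
from itertools import combinations
from collections import defaultdict


def consensus_subgroups(clusters, n_images, threshold):
    """Partition images into consensus subgroups.

    clusters:  list of annotated clusters, each a list of image indices
    n_images:  total number of images (indices 0..n_images-1)
    threshold: consensus level; an edge (i, j) survives if images i and j
               were co-selected at least `threshold` times
    """
    # Affinity matrix A[i, j]: co-selection counts over all annotations.
    A = defaultdict(int)
    for cluster in clusters:
        for i, j in combinations(sorted(set(cluster)), 2):
            A[(i, j)] += 1

    # Keep only edges that meet the consensus threshold.
    adj = defaultdict(set)
    for (i, j), count in A.items():
        if count >= threshold:
            adj[i].add(j)
            adj[j].add(i)

    # Connected components of the thresholded graph = subgroups.
    seen, groups = set(), []
    for start in range(n_images):
        if start in seen:
            continue
        stack, component = [start], []
        seen.add(start)
        while stack:
            u = stack.pop()
            component.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        groups.append(sorted(component))
    return groups
```

Raising the threshold prunes weakly co-selected pairs, so a single loose group splits into tighter subgroups, which matches the quoted description of thresholding A_{i,j} at increasing consensus levels.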
“…This is exacerbated when we wish to be able to specify multiple conditions, as there are even fewer training images available for each combination of conditions. We train our GAN using an enriched version of the ArtEmis dataset by Achlioptas et al [4] and investigate the effect of multiconditional labels. Two example images produced by our models can be seen in Fig.…”
Section: Introduction
confidence: 99%