2019
DOI: 10.1007/978-3-030-11012-3_48

Deep Transfer Learning for Art Classification Problems

Abstract: In this paper we investigate whether Deep Convolutional Neural Networks (DCNNs), which have obtained state-of-the-art results on the ImageNet challenge, are able to perform equally well on three different art classification problems. In particular, we assess whether it is beneficial to fine-tune the networks instead of just using them as off-the-shelf feature extractors for a separately trained softmax classifier. Our experiments show how the first approach yields significantly better results and allows the DC…
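The two transfer strategies compared in the abstract can be illustrated with a short, hedged sketch. The code below uses PyTorch and a ResNet-50 backbone purely as an illustration; the paper's exact architectures, class count, and hyperparameters are not given here, so NUM_ART_CLASSES and the optimizer settings are assumptions. One branch freezes the pretrained convolutional weights and trains only a softmax classifier on top; the other fine-tunes all weights on the art dataset.

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_ART_CLASSES = 10   # hypothetical number of classes in the art dataset

    # Strategy 1: off-the-shelf feature extractor + separately trained softmax classifier.
    extractor = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    for p in extractor.parameters():
        p.requires_grad = False                        # freeze all pretrained weights
    extractor.fc = nn.Linear(extractor.fc.in_features, NUM_ART_CLASSES)
    opt_extractor = torch.optim.SGD(extractor.fc.parameters(), lr=1e-3, momentum=0.9)

    # Strategy 2: fine-tuning, i.e. all weights keep being updated on the art dataset.
    finetuned = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    finetuned.fc = nn.Linear(finetuned.fc.in_features, NUM_ART_CLASSES)
    opt_finetuned = torch.optim.SGD(finetuned.parameters(), lr=1e-3, momentum=0.9)

    criterion = nn.CrossEntropyLoss()                  # softmax cross-entropy in both cases

The difference between the two strategies is only which parameters the optimizer updates; the loss and data pipeline can stay identical.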

Cited by 47 publications (47 citation statements). References 27 publications (31 reference statements).
“…Inductive TL [18] is the branch of TL related to problems where we have datasets of input-output pairs in both the source and target domains, and where the source and target tasks differ. Typically, inductive TL is used to repurpose well-known, efficient machine learning models trained on large datasets to related problems with smaller training datasets [19, 21].…”
Section: Transfer Learning For Water Segmentation
confidence: 99%
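The formulas dropped from the quoted passage during extraction most likely referred to the standard Pan and Yang domain/task notation; the following restatement is an assumption on my part, not a quote from the citing paper:

    % Standard transfer-learning notation (Pan & Yang), assumed to be what the
    % dropped formulas in the quoted passage expressed.
    \begin{align*}
      \mathcal{D} &= \{\mathcal{X},\, P(X)\} && \text{domain: feature space and marginal distribution}\\
      \mathcal{T} &= \{\mathcal{Y},\, f(\cdot)\} && \text{task: label space and predictive function}\\
      \text{Inductive TL:}\quad & \mathcal{T}_S \neq \mathcal{T}_T && \text{(labelled target data available; } \mathcal{D}_S \text{ and } \mathcal{D}_T \text{ may or may not coincide)}
    \end{align*}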
“…The evaluation based on transfer learning mainly refers to the transfer of algorithms and knowledge developed for natural images to the aesthetic assessment of artistic images. Sabatelli et al. [92] used the VisualBackProp visualization method [93] to inspect a deep neural network before and after fine-tuning its weights, showing that before fine-tuning the network's activation area is mainly concentrated in locations indicating the object type, whereas after fine-tuning the active area moves to positions more relevant to the specific task in art paintings. Li et al. [78] applied deep convolutional features to the classification and evaluation of sketch copy works collected in a teaching scenario and covering 25 types of works (such as apple, ceramic pot, glass, cube, molectron, etc.…”
Section: A Multi-task Probabilistic Framework
confidence: 99%
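As a rough illustration of the VisualBackProp idea referenced above, here is a hedged sketch rather than the implementation used in [92] or [93]: it averages each convolutional stage's feature maps over channels and propagates the deepest map back towards the input by upsampling and pointwise multiplication. Bilinear upsampling stands in for the transposed convolution with unit weights described in the original method, and the choice of ResNet-50 stages is an assumption made purely for illustration.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    def visual_backprop(model, image):
        # Capture the outputs of each convolutional stage via forward hooks.
        activations = []

        def hook(_module, _inp, out):
            activations.append(out.detach())

        stages = [model.layer1, model.layer2, model.layer3, model.layer4]
        handles = [s.register_forward_hook(hook) for s in stages]
        with torch.no_grad():
            model(image)
        for h in handles:
            h.remove()

        # Channel-wise averages, ordered shallowest to deepest.
        avg_maps = [a.mean(dim=1, keepdim=True) for a in activations]

        # Propagate the deepest map back: upsample, then multiply pointwise.
        mask = avg_maps[-1]
        for prev in reversed(avg_maps[:-1]):
            mask = F.interpolate(mask, size=prev.shape[-2:], mode="bilinear",
                                 align_corners=False)
            mask = mask * prev
        mask = F.interpolate(mask, size=image.shape[-2:], mode="bilinear",
                             align_corners=False)
        # Normalise to [0, 1] for visualisation.
        return (mask - mask.min()) / (mask.max() - mask.min() + 1e-8)

    # Usage: masks computed before and after fine-tuning can be compared visually.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
    saliency = visual_backprop(model, torch.randn(1, 3, 224, 224))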
“…networks are trained on small datasets composed of 100-300 training images (Lopez-Fuentes et al., 2017; Steccanella et al., 2018; Moy de Vitry et al., 2019), while more popular problems can be trained on datasets of more than 15,000 images (e.g., Caesar et al. (2018); Zhou et al. (2017)). In many cases, using inductive TL approaches to train CNNs, instead of training them from scratch with randomly initialised weights, improves network performance (Reyes et al., 2015; Sabatelli et al., 2018).…”
Section: Transfer Learning
confidence: 99%
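The comparison drawn in this passage, training from scratch with randomly initialised weights versus starting from ImageNet weights, comes down to a single difference at model construction time. A minimal sketch, with a torchvision ResNet-50 assumed purely for illustration:

    from torchvision import models

    # Training from scratch: randomly initialised weights.
    scratch_net = models.resnet50(weights=None)

    # Inductive TL: start from ImageNet weights, then train on the small target dataset.
    pretrained_net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

Everything else in the training pipeline (loss, optimiser, data) can remain identical.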