IEEE Punecon 2018
DOI: 10.1109/punecon.2018.8745417

A Review of Deep Learning Models for Computer Vision

Cited by 9 publications (8 citation statements)
References 13 publications
“…Our system also differs from competitors that use contrastive learning (Islam et al., 2021; Feng et al., 2021), which exploits similarities and differences in the data to improve training efficiency, to train all of their model parameters. In contrast, our classifier backbone consists of pretrained models (Canziani et al., 2016; Shah and Harpale, 2018) that were trained on another task, with the head finetuned for paintings classification.…”
Section: Model Agnostic Data Augmentation
confidence: 99%
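The setup quoted above, a frozen pretrained backbone with only the classification head finetuned, can be sketched in miniature. The feature vectors, labels, and learning rate below are illustrative stand-ins, not values from either paper; a pure-Python logistic-regression head stands in for the finetuned layer.

```python
import math
import random

# Stand-in for frozen backbone outputs: each painting is already a feature
# vector (in practice, VGG-16/ResNet-50 activations); labels are 0/1 styles.
features = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
labels = [0, 0, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Only the head (w, b) is trainable; the features above never change,
# mirroring "finetune the head, freeze the backbone".
random.seed(0)
w = [random.uniform(-0.1, 0.1) for _ in range(2)]
b = 0.0
lr = 0.5

for _ in range(200):
    for x, y in zip(features, labels):
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - y  # gradient of cross-entropy w.r.t. the logit
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

preds = [round(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b))
         for x in features]
print(preds)  # the head separates the two toy "styles": [0, 0, 1, 1]
```

Because the backbone features never receive gradients, only the tiny head is optimized, which is what makes this cheaper than the full-parameter contrastive training the quote contrasts it with.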
“…The classifier, depicted in Figure 5, is built from a pre-trained image classification model such as VGG-16 or ResNet-50 (Canziani et al., 2016; Shah and Harpale, 2018). The very first layer is extracted, and three layers between the first and last are selected to capture features with more spatial information, yielding richer features and balancing the contributions of style and content information to the classification loss. The spatial attention module takes the re-projected layer and computes attention against the global feature from the bottleneck.…”
Section: Spatial Attention Based Image Classifier
confidence: 99%
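The attention step described in the quote, local features scored against a global bottleneck feature, can be sketched as follows. The 2×2 feature map, channel count, and dot-product scoring are illustrative assumptions; the citing paper's exact attention formulation may differ.

```python
import math

def spatial_attention(local_feats, global_feat):
    """Weight each spatial position by its affinity to the global feature.

    local_feats: list of per-position feature vectors (flattened H*W grid).
    global_feat: single vector, e.g. from the network bottleneck.
    Returns (attention weights over positions, attended feature vector).
    """
    # Score each position by its dot product with the global feature.
    scores = [sum(l * g for l, g in zip(pos, global_feat))
              for pos in local_feats]
    m = max(scores)  # subtract max for a numerically stable softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    attn = [e / total for e in exps]
    # Attended feature: attention-weighted sum of the local features.
    attended = [sum(a * pos[c] for a, pos in zip(attn, local_feats))
                for c in range(len(global_feat))]
    return attn, attended

# Toy 2x2 map (4 positions) with 3 channels; position 0 aligns with the
# global feature, so it should receive the largest attention weight.
local = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0], [0.1, 0.1, 0.1]]
global_feat = [1.0, 0.0, 0.0]
attn, attended = spatial_attention(local, global_feat)
print(max(range(4), key=lambda i: attn[i]))  # prints 0
```

The softmax over spatial positions means the attended vector emphasizes regions that agree with the global (bottleneck) context, which is the role the quote assigns to the spatial attention module.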
“…In recent years, with the participation of giant technology companies, the competition's popularity has grown steadily. The ILSVRC competition contributes both to rapid advances in the state of the art for computer vision tasks and to broader innovations in the architecture of CNN models [4], [11].…”
Section: Deep Learning Models
confidence: 99%
“…Determining and classifying the coordinates of the classes in the images is important at this point. The task of detecting and tracking people and vehicles in drone imagery can be addressed with recently introduced deep learning algorithms [4].…”
Section: Introduction
confidence: 99%