2019
DOI: 10.1109/tcsvt.2018.2848543
Fusing Object Semantics and Deep Appearance Features for Scene Recognition

Cited by 44 publications (26 citation statements) · References 44 publications
“…All the methods listed in Table I use CNNs: Adi-Red [13], CCM [9], CNN-SMN [14], and SOSF + DFA + GAF [20] used information about the objects that appear in scene images. To obtain this object information, they used a CNN pre-trained on an object recognition dataset.…”
Section: Experimental Results on the Places
confidence: 99%
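The citation above notes that these methods extract object information from a CNN pre-trained on an object recognition dataset and combine it with scene appearance features. As a hedged illustration of that general pattern (not the authors' actual pipeline — the dimensions, the softmax-based object-semantics vector, and the concatenation fusion are assumptions for illustration), the fusion step can be sketched as:

```python
import numpy as np

def softmax(logits):
    """Convert raw classifier scores into object-class probabilities."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def fuse_features(object_logits, appearance_feat):
    """Fuse an object-semantics vector with a deep appearance feature.

    object_logits: raw scores from an object classifier pre-trained on an
        object recognition dataset (e.g., 1000 ImageNet classes).
    appearance_feat: pooled CNN activation describing global scene appearance.
    Both parts are L2-normalized before concatenation so neither dominates.
    """
    sem = softmax(object_logits)
    sem = sem / np.linalg.norm(sem)
    app = appearance_feat / np.linalg.norm(appearance_feat)
    return np.concatenate([sem, app])

# Toy example: 5 object classes, 8-dim appearance feature.
rng = np.random.default_rng(0)
fused = fuse_features(rng.normal(size=5), rng.normal(size=8))
print(fused.shape)  # → (13,)
```

The fused vector would then feed a scene classifier; concatenation of normalized parts is a common, simple fusion choice, used here only to make the idea concrete.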
“…When a mixed CCM-CCG model is used, our FOSNet achieves state-of-the-art accuracy of 60.14% on the Places 2, and it is the first time that the accuracy exceeds 60% on the dataset.…”

Accuracy comparison (method [ref], accuracy %):
[1] 56.2
Gaze Shifting-CNN+SVM [19] 56.2
MetaObject-CNN [15] 58.11
Places365-VGG-SVM [28] 63.24
Three [5] 70.17
Hybrid CNN [21] 70.69
Sparse Representation [23] 71.08
Multi-Resolution CNNs [7] 72.0
CNN-SMN [14] 72.6
PatchNet [22] 73.0
SDO [6] 73.41
Adi-Red [13] 73.59
SOSF+CFA+GAF [20] 78…

Section: Experimental Results on the Places
confidence: 99%
“…Table 5 compares PulseNetOne to the related work in this area; both networks pruned by PulseNetOne outperform the state of the art by over 6%. FOSNet CCG [30] and SOSF+CFA+GAF [50] held the previous best published results on the MIT67 dataset, at 90.37% and 89.51% respectively, but both were clearly surpassed by PulseNetOne. Figure 4 shows that AlexNet nearly reached its theoretical performance in all experiments except CPU inference timing, while the pruned network was approximately 3× faster than the original network.…”
Section: Methods, Year, Accuracy
confidence: 90%
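The excerpt above attributes the ~3× CPU speed-up to pruning but does not describe PulseNetOne's pruning criterion. As a generic, hedged sketch of the underlying idea (simple magnitude pruning — an assumption for illustration, not necessarily PulseNetOne's method), removing low-magnitude weights yields a sparse layer whose zeroed entries can be skipped at inference time:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.

    A generic illustration of pruning: weights below the magnitude
    threshold are set to zero, so a sparse kernel can skip them.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Toy 4x4 weight matrix, pruned to 50% sparsity.
rng = np.random.default_rng(1)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, 0.5)
print((pruned == 0).mean())  # fraction of zeroed weights: 0.5 here
```

In practice the speed-up depends on the runtime exploiting the sparsity (or on removing whole filters so the dense layer shrinks); this sketch only shows the selection criterion.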