2016
DOI: 10.1016/j.image.2016.06.002

Robust object representation by boosting-like deep learning architecture

Abstract: Northumbria University has developed Northumbria Research Link (NRL) to enable users to access the University's research output. Copyright © and moral rights for items on NRL are retained by the individual author(s) and/or other copyright owners. Single copies of full items can be reproduced, displayed or performed, and given to third parties in any format or medium for personal research or study, educational, or not-for-profit purposes without prior permission or charge, provided the authors, title and full b…

Cited by 12 publications (3 citation statements)
References 45 publications
“…With the great success of CNNs on tasks such as image classification [22]–[25], natural language processing [26], [27], object detection [28]–[30], and image segmentation [31], [32], several trackers based on CNNs have been proposed. Fan et al. [33] pre-train a network using the location and appearance information of the object of interest to extract both spatial and temporal features.…”
Section: Related Work (mentioning)
confidence: 99%
“…Assume there are L layers in a CNN model, and k feature maps in the l-th layer, where l = 1, 2, …, L. At a given layer, the previous layer's feature maps are convolved with the learned filter kernels and passed through the activation function F(·) to produce the output feature maps [25]. Thus, the k-th feature map of the l-th layer can be computed as $x_k^l = F\big(\sum_{n \in I_k} x_n^{l-1} * w_{nk}^l + b_k^l\big)$, where $x_n^{l-1}$ is the n-th feature map in the (l−1)-th layer, * represents the convolution operation, $w_{nk}^l$ is the corresponding filter kernel, $b_k^l$ is an additive bias of the k-th feature map in the l-th layer, and $I_k$ denotes all the input maps convolved to form the k-th feature map.…”
Section: Feature Extraction Based On Convolutional Neural Network (mentioning)
confidence: 99%
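
A minimal sketch of the per-map computation quoted above, assuming a ReLU activation for F(·), 2-D "valid" convolutions, and NumPy/SciPy helpers; the function and variable names are illustrative and not taken from the cited paper.

import numpy as np
from scipy.signal import convolve2d

def relu(x):
    # Assumed choice for the activation F(.) in the quoted formula.
    return np.maximum(x, 0.0)

def feature_map_k(prev_maps, kernels, bias_k, input_ids):
    """Compute x_k^l = F( sum_{n in I_k} x_n^{l-1} * w_nk^l + b_k^l ).

    prev_maps : list of 2-D arrays, the feature maps x_n^{l-1} of layer l-1
    kernels   : dict mapping n -> 2-D kernel w_nk^l feeding output map k
    bias_k    : scalar additive bias b_k^l
    input_ids : I_k, indices of the previous-layer maps convolved for map k
    """
    acc = None
    for n in input_ids:
        conv = convolve2d(prev_maps[n], kernels[n], mode="valid")
        acc = conv if acc is None else acc + conv
    return relu(acc + bias_k)

# Toy usage: two 8x8 input maps, 3x3 kernels, one output feature map.
rng = np.random.default_rng(0)
prev = [rng.standard_normal((8, 8)) for _ in range(2)]
w = {n: rng.standard_normal((3, 3)) for n in range(2)}
out = feature_map_k(prev, w, bias_k=0.1, input_ids=[0, 1])
print(out.shape)  # (6, 6)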
“…Gupta et al. [23] developed a Fully-Convolutional Regression Network (FCRN) trained with synthetic images which performs both text detection and bounding box regression. A robust object representation, formed by fusing handcrafted features with deep-learned features, is proposed in [44].…”
Section: Previous Work (mentioning)
confidence: 99%
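
The feature fusion mentioned in the last statement can be illustrated with a short sketch. The actual fusion scheme of [44] is not described in the excerpt, so the simple concatenation below, the L2 normalisation, and the descriptor sizes are assumptions made only for illustration.

import numpy as np

def fuse_features(handcrafted, deep):
    # L2-normalise each descriptor, then concatenate them into one vector.
    # Concatenation is an assumed fusion rule, not necessarily that of [44].
    h = handcrafted / (np.linalg.norm(handcrafted) + 1e-12)
    d = deep / (np.linalg.norm(deep) + 1e-12)
    return np.concatenate([h, d])

# Toy usage: a 64-D handcrafted descriptor (e.g. HOG-like) fused with a
# 256-D deep descriptor taken from some CNN layer.
rng = np.random.default_rng(0)
fused = fuse_features(rng.random(64), rng.random(256))
print(fused.shape)  # (320,)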