2012
DOI: 10.1016/j.patcog.2011.08.010
TED: A texture-edge descriptor for pedestrian detection in video sequences

Cited by 23 publications (22 citation statements)
References 26 publications
“…Most of the research works in motion segmentation have been attempted using the conventional background subtraction [5], statistical background subtraction [6][7][8][9][10], temporal differencing [11][12][13], optical flow [14] and hybrid [15][16][17][18][19][20][21] approaches.…”
Section: Literature Review
confidence: 99%
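The citation statement above names several motion-segmentation families. As an illustration only (the thresholds and array shapes are assumptions, not taken from the cited works), a minimal NumPy sketch of two of them — conventional background subtraction against a static background model, and temporal differencing between consecutive frames — could look like this:

```python
import numpy as np

def background_subtraction(frame, background, thresh=25):
    """Conventional background subtraction: pixels that differ from a
    static background model by more than `thresh` are foreground."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

def temporal_differencing(prev_frame, frame, thresh=25):
    """Temporal differencing: foreground is where consecutive frames
    differ, so only moving regions respond (stationary objects vanish)."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

# Toy example: a bright 2x2 "object" moves one pixel to the right.
bg = np.zeros((8, 8), dtype=np.uint8)
f1 = bg.copy(); f1[3:5, 2:4] = 200
f2 = bg.copy(); f2[3:5, 3:5] = 200

fg = background_subtraction(f2, bg)        # full object: 4 pixels
motion = temporal_differencing(f1, f2)     # only changed pixels: 4 pixels
print(fg.sum(), motion.sum())              # 4 4
```

Note the characteristic difference: background subtraction recovers the whole object, while temporal differencing marks only the pixels that changed between frames, which is why hybrid approaches combining both are common.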
“…In our work, we obtained image features using TED [37], SUN [39], SURF [38], HOG [36], and Gist [27] image descriptors. Image feature dimension obtained through TED and SUN image descriptors are the same as that of the original image which we further reduced using I2A approach.…”
Section: Nearest Neighborhood Quality Measure and Optimal Image Representation
confidence: 99%
“…We improved local neighborhood structure using maximum NNQ measure in order to achieve performance improvement for image datasets with significant within-class variation. We selected an optimal image descriptor among TED [37], SUN [39], SURF [38], HOG [36], and Gist [27]. For each image descriptor, we have shown simulation results of computing NNQ measure for each image dataset in Table 2 in which an overall NNQ measure on all image datasets is also shown.…”
Section: Nearest Neighborhood Quality Measure and Optimal Image Representation
confidence: 99%
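The NNQ measure itself is defined in the citing work and is not reproduced here; purely as a hypothetical illustration of the idea of scoring local neighborhood structure per descriptor, one common proxy is the mean fraction of each sample's k nearest neighbors that share its class label:

```python
import numpy as np

def neighborhood_quality(features, labels, k=3):
    """Hypothetical neighborhood-quality proxy (NOT the cited NNQ
    definition): mean fraction of each sample's k nearest Euclidean
    neighbors that share the sample's class label. Higher is better."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(labels)
    # Pairwise squared Euclidean distances between all feature vectors.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-matches
    nn = np.argsort(d2, axis=1)[:, :k]    # indices of k nearest neighbors
    return (y[nn] == y[:, None]).mean()

# Two well-separated clusters: every neighbor shares its label.
X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]])
y = np.array([0, 0, 0, 1, 1, 1])
print(neighborhood_quality(X, y, k=2))  # 1.0
```

Under a measure of this kind, the descriptor (TED, SUN, SURF, HOG, or Gist) yielding the highest score for a dataset would be selected as the optimal representation, which matches the selection procedure the citation statement describes.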