2018
DOI: 10.3390/s18103400

NCA-Net for Tracking Multiple Objects across Multiple Cameras

Abstract: Tracking multiple pedestrians across multi-camera scenarios is an important part of intelligent video surveillance, has great potential application for public security, and has been an attractive topic in the literature in recent years. Most previous methods adopted artificial features such as color histograms, HOG descriptors and Haar-like features to associate objects among different cameras, but many challenges remain, caused by low resolution, variation of illumination, complex backgrounds…
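The abstract points to hand-crafted appearance cues (color histograms, HOG descriptors, Haar-like features) that earlier methods used to associate detections across cameras. As a rough illustration of that general idea only, and not of this paper's method, here is a minimal OpenCV sketch of colour-histogram matching; the helpers `hsv_histogram` and `associate` and all parameter choices are hypothetical:

```python
import cv2
import numpy as np

def hsv_histogram(patch, bins=(16, 16)):
    """Hypothetical helper: normalised Hue/Saturation histogram of a BGR crop."""
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, list(bins), [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def associate(query_patch, gallery_patches):
    """Match one detection against candidate crops from another camera by
    histogram correlation; a higher score means a more similar appearance."""
    q = hsv_histogram(query_patch)
    scores = [cv2.compareHist(q, hsv_histogram(g), cv2.HISTCMP_CORREL)
              for g in gallery_patches]
    return int(np.argmax(scores)), max(scores)
```

Such hand-crafted descriptors are exactly what the abstract says break down under low resolution, illumination change and pose variation, which motivates the learned embedding of NCA-Net.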

Cited by 2 publications (1 citation statement)
References 43 publications
“…A convolutional neural network with a loss function similar to neighborhood components analysis (NCA-Net) has been proposed to tackle the challenges of tracking multiple objects across multiple cameras, challenges caused by low resolution, variation of illumination, complex backgrounds and posture change [20]. The integration of different computer vision and pattern recognition techniques for surveillance with multiple cameras is presented by Wang et al. [21].…”
Section: Introduction (mentioning, confidence: 99%)
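The statement above notes that NCA-Net uses a loss similar to neighborhood components analysis. The paper's exact formulation is not reproduced here; the following is only a minimal PyTorch sketch of a generic NCA-style loss over identity labels, with `nca_loss`, `embeddings` and `labels` being hypothetical names, and the batch-level negative log-likelihood being one common variant rather than necessarily the authors' choice:

```python
import torch
import torch.nn.functional as F

def nca_loss(embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """NCA-style loss: for each sample, maximise the softmax probability
    (over negative squared Euclidean distances) of picking a neighbour
    that carries the same identity label."""
    # Pairwise squared Euclidean distances between all embeddings in the batch.
    dists = torch.cdist(embeddings, embeddings).pow(2)
    # Exclude self-matches: push the diagonal to +inf so exp(-inf) = 0.
    dists.fill_diagonal_(float("inf"))
    # p_ij = exp(-d_ij) / sum_k exp(-d_ik), computed in log space for stability.
    log_p = F.log_softmax(-dists, dim=1)
    # Boolean mask of pairs that share the same label (diagonal excluded).
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1))
    same.fill_diagonal_(False)
    # Probability mass each sample assigns to same-identity neighbours.
    p_same = (log_p.exp() * same).sum(dim=1).clamp_min(1e-12)
    # Negative log-likelihood, averaged over the batch.
    return -p_same.log().mean()

# Toy usage: 8 embeddings of dimension 128 covering 4 identities.
emb = torch.randn(8, 128)
ids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(nca_loss(emb, ids))
```

Minimising this sketch pulls embeddings of the same identity together relative to all other samples in the batch, which is the property a cross-camera association step relies on when matching pedestrian crops.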