2010 Fourth Pacific-Rim Symposium on Image and Video Technology
DOI: 10.1109/psivt.2010.83
Moving Objects Detection and Tracking Framework for UAV-based Surveillance

Cited by 50 publications (15 citation statements)
References 11 publications
“…[20-22] Segmentation techniques can be based on thresholding [23, 24], morphological operations [25], edge detection [15, 26], or superpixels [27, 28] in combination with connected component labeling, while machine learning approaches use trained classifiers in a sliding-window framework [29-31], often applied only to independently moving image regions [32-34]. To further improve these methods, several approaches exist for spatial information fusion [15, 26, 31, 35, 36] and for the consideration of context knowledge, such as street networks or tracking statistics [18, 25, 32, 33, 37-39]. Temporal information fusion, however, is often introduced by using single or multiple object tracking based on initial detections.…”
Section: Related Work (mentioning)
confidence: 99%
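As context for the segmentation pipelines named in this excerpt (thresholding and morphological operations combined with connected component labeling), the following is a minimal, generic sketch using OpenCV. It is not code from any of the cited works; the difference threshold and minimum component area are illustrative assumptions.

```python
# Minimal sketch: frame-difference thresholding, morphological cleanup, and
# connected component labeling. Threshold/area values are illustrative assumptions.
import cv2
import numpy as np

def detect_moving_regions(prev_gray, curr_gray, thresh=25, min_area=50):
    """Return bounding boxes of candidate moving regions between two grayscale frames."""
    diff = cv2.absdiff(prev_gray, curr_gray)                  # temporal difference image
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    # Connected component labeling groups foreground pixels into candidate objects
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = []
    for i in range(1, num):                                   # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))
    return boxes
```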
“…This system runs fast, but it cannot handle complex scaling scenarios. Ibrahim et al. [17] proposed the MODAT framework. Instead of Harris corners, they adopted SIFT (scale-invariant feature transform) [18] features to perform the image matching.…”
Section: Related Work (mentioning)
confidence: 99%
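To illustrate the matching step this excerpt describes (SIFT features in place of Harris corners), below is a minimal sketch using OpenCV's SIFT implementation with a Lowe ratio test. It is not the MODAT authors' code; the brute-force matcher choice and the 0.75 ratio threshold are assumptions.

```python
# Minimal sketch: SIFT keypoint detection and descriptor matching between two
# grayscale images, filtered by Lowe's ratio test (0.75 is an assumed default).
import cv2

def match_sift(img1_gray, img2_gray, ratio=0.75):
    """Detect SIFT keypoints in two grayscale images and return the good matches."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1_gray, None)
    kp2, des2 = sift.detectAndCompute(img2_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)                   # two nearest neighbors per descriptor
    good = [p[0] for p in knn
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return kp1, kp2, good
```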
“…In this study, the random sample consensus (RANSAC) algorithm, which uses a homography as the geometric constraint model, is applied to remove mismatched keypoint pairs. A homography relates two images under translation, 3D rotation (roll, pitch, and yaw), and zoom transformations [20]. When translation, rotation, and zoom occur between the visible image and the database image acquired through the infrared camera, a homography is therefore the most suitable choice for the geometric constraint model.…”
Section: Matching Refinement (mentioning)
confidence: 99%
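To illustrate the matching-refinement step described here, below is a minimal sketch that estimates a homography with RANSAC via OpenCV and keeps only the inlier keypoint pairs. It is a generic illustration rather than the cited study's implementation; the 3-pixel reprojection threshold is an assumption.

```python
# Minimal sketch: RANSAC-based mismatch removal with a homography as the
# geometric constraint. The reprojection threshold (3.0 px) is an assumption.
import cv2
import numpy as np

def filter_matches_ransac(kp1, kp2, matches, reproj_thresh=3.0):
    """Keep only matches consistent with a homography estimated by RANSAC."""
    if len(matches) < 4:                                      # a homography needs at least 4 pairs
        return None, []
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    inliers = [m for m, keep in zip(matches, inlier_mask.ravel()) if keep]
    return H, inliers
```

In a typical pipeline this step is chained after the SIFT matching sketch above, passing in the two keypoint lists and the ratio-test matches.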