2007 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2007.382989

Improved Video Registration using Non-Distinctive Local Image Features

Abstract: The task of registering video frames with a static model is a common problem in many computer vision domains. The standard approach to registration involves finding point correspondences between the video and the model and using those correspondences to numerically determine registration transforms. Current methods locate video-to-model point correspondences by assembling a set of reference images to represent the model and then detecting and matching invariant local image features between the video frames and…
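
As context for the standard approach the abstract describes (not this paper's contribution), here is a minimal OpenCV sketch of frame-to-model registration: detect and match SIFT features between a video frame and one model reference image, then fit a homography with RANSAC. The function name and parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def register_frame_to_reference(frame_gray, ref_gray):
    """Estimate a frame-to-reference homography from local feature matches
    (illustrative helper, not the paper's algorithm)."""
    sift = cv2.SIFT_create()
    kp_f, des_f = sift.detectAndCompute(frame_gray, None)
    kp_r, des_r = sift.detectAndCompute(ref_gray, None)

    # Lowe's ratio test keeps only distinctive correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des_f, des_r, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < 4:
        return None  # too few correspondences for a homography

    src = np.float32([kp_f[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects outlier matches while fitting the registration transform.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```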

Cited by 51 publications (37 citation statements), published 2008–2024
References 11 publications

“…This is because the additional reference points are farther from the embedded block and the local linearity cannot be assumed any more. If the number of reference points is small, the risk of failing to detect any of them becomes higher. On the other hand, if the number of reference points is large, it would result in larger database size.…”
[Table spilled into the quoted text: detection rates in % (values in parentheses) under Wave, Implode, Swirl and Seam-carving distortions for eight method configurations; column labels are not recoverable from this excerpt.]
Section: Methods (mentioning)
confidence: 99%
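
The trade-off quoted above rests on a local-linearity argument: reference points near the embedded block support an affine approximation of the distortion, while points farther away do not. The NumPy sketch below, using a toy sinusoidal "wave" warp and arbitrary numbers of my own choosing (not the cited paper's data), illustrates how the affine fit residual grows as the neighbourhood of reference points widens.

```python
import numpy as np

def wave_warp(pts, amplitude=5.0, period=256.0):
    """Toy non-linear 'wave' distortion: shift x by a sinusoid of y."""
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([x + amplitude * np.sin(2 * np.pi * y / period), y], axis=1)

def affine_fit_residual(src, dst):
    """Least-squares affine fit dst ~ [x, y, 1] @ A; return mean residual (px)."""
    X = np.hstack([src, np.ones((len(src), 1))])   # N x 3
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # 3 x 2 affine parameters
    return np.linalg.norm(X @ A - dst, axis=1).mean()

rng = np.random.default_rng(0)
center = np.array([300.0, 300.0])

for radius in (20, 200):  # nearby vs. far-away reference points
    pts = center + rng.uniform(-radius, radius, size=(50, 2))
    err = affine_fit_residual(pts, wave_warp(pts))
    print(f"radius {radius:3d}px -> mean affine residual {err:.2f}px")
```

With the small radius the residual stays near zero, so a locally fitted affine model recovers the warp; with the large radius the sinusoid completes most of a period inside the neighbourhood and the linear assumption breaks down.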
“…The reference points for each block are found by SIFT matching. Then, the local area around the block is restored to its original size and shape using a RANSAC-based method [20]. Even if the content is non-linearly geometrically distorted, the distortion can be assumed linear within such a small local area.…”
Section: Detection (mentioning)
confidence: 99%
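
A minimal OpenCV sketch of the idea in the statement above, assuming the matched reference-point coordinates are already available: fit an affine transform to them with RANSAC and restore the area around one embedded block. The helper name, patch size, and thresholds are hypothetical; the cited scheme's exact procedure is described in its reference [20].

```python
import cv2
import numpy as np

def restore_block_locally(distorted, block_center, ref_pts, det_pts, patch=128):
    """Undo geometric distortion in a small window around one embedded block.

    ref_pts : reference-point coordinates in the original image (k x 2)
    det_pts : the same points as detected (via SIFT matching) in the
              distorted image (k x 2)
    """
    # Within a small local area a linear (affine) model is assumed to hold,
    # so fit it robustly to the matched reference points with RANSAC.
    A, _ = cv2.estimateAffine2D(
        np.float32(det_pts), np.float32(ref_pts),
        method=cv2.RANSAC, ransacReprojThreshold=3.0)
    if A is None:
        return None

    # Warp the distorted image back toward the original geometry and cut out
    # the patch around the block; only the block's neighbourhood is trusted.
    h, w = distorted.shape[:2]
    restored = cv2.warpAffine(distorted, A, (w, h))
    cx, cy = int(block_center[0]), int(block_center[1])
    half = patch // 2
    return restored[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
```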
“…Being rotation and scale invariant, such local features can be used to match images with large viewpoint changes, under analytic transformations such as affine or perspective, and with occlusions. Salzmann and Fua [24] also use such local features to find point correspondences in the case of non-rigid deformation, but trustworthy local matches are sparse and spatial models have to be included to obtain denser correspondences [37,11].…”
Section: Related Work (mentioning)
confidence: 99%