Proceedings of the British Machine Vision Conference 2005
DOI: 10.5244/c.19.31

Colour Constrained 4D Flow

Abstract: The addition of colour information to the computation of range/scene flow is proposed to improve its accuracy and robustness to ambiguities. This is applied in the form of additional optical flow constraints from aligned colour image data. Combining constraints gives improved velocity displacement fields for both synthetic and real datasets over using depth alone, or depth plus intensity. This ultimately has benefits for the processing of dense, temporal depth data obtainable from novel video-rate 3D …
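To make the combined constraints concrete, here is a minimal sketch, not the authors' implementation, of the idea in the abstract: the range-flow constraint from a depth sequence is stacked with one brightness-constancy constraint per aligned colour channel, and the local 3D velocity (u, v, w) is recovered by least squares over a small window. The function name, the window size r, and the use of np.gradient for derivatives are illustrative assumptions.

import numpy as np

def colour_constrained_flow(Z0, Z1, C0, C1, y, x, r=3):
    """Estimate (u, v, w) at pixel (y, x).

    Z0, Z1 : depth frames at times t and t+1, shape (H, W)
    C0, C1 : aligned colour frames at times t and t+1, shape (H, W, 3)
    r      : half-size of the local window
    """
    def grads(F):
        Fy, Fx = np.gradient(F)   # spatial derivatives (rows = y, cols = x)
        return Fx, Fy

    win = np.s_[y - r:y + r + 1, x - r:x + r + 1]
    n = (2 * r + 1) ** 2
    rows, rhs = [], []

    # Range-flow constraint from depth: Zx*u + Zy*v - w + Zt = 0
    Zx, Zy = grads(Z0)
    Zt = Z1 - Z0
    rows.append(np.stack([Zx[win].ravel(), Zy[win].ravel(), -np.ones(n)], axis=1))
    rhs.append(-Zt[win].ravel())

    # One brightness-constancy constraint per colour channel:
    # Cx*u + Cy*v + Ct = 0 (the colour channels do not constrain w directly)
    for c in range(C0.shape[2]):
        Cx, Cy = grads(C0[..., c])
        Ct = C1[..., c] - C0[..., c]
        rows.append(np.stack([Cx[win].ravel(), Cy[win].ravel(), np.zeros(n)], axis=1))
        rhs.append(-Ct[win].ravel())

    A = np.concatenate(rows)
    b = np.concatenate(rhs)
    uvw, *_ = np.linalg.lstsq(A, b, rcond=None)
    return uvw   # (u, v, w) displacement at (y, x)

The design point the abstract makes is visible here: the colour channels constrain only the image-plane motion (u, v), while the depth channel ties in the out-of-plane component w, so stacking the constraints disambiguates cases where intensity alone is ambiguous.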

Cited by 12 publications (8 citation statements)
References 11 publications
“…In our approach, instead of tracking 3D points we use a Lucas-Kanade tracking framework [2] to directly calculate the scene flow of small surface patches. As in [33], we assume that the scene is composed of independently, but rigidly moving 3D parts, avoiding smoothness constraints in the image domain used in [29,21]. Unlike previous Lucas-Kanade methods that use a 2D warping [2], we model the image flow as a function of the 3D motion field with help from a depth sensor, improving the accuracy of the optical flow and solving directly for the scene flow.…”
Section: Related Work
confidence: 99%
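The statement above models image flow as a function of the 3D motion field using a depth sensor. A rough sketch of that general idea, not the cited method itself, is given below: the linearised image flow induced by a per-patch 3D translation under a pinhole camera is substituted into the brightness-constancy equation, and the translation is recovered by least squares over the patch. The pinhole intrinsics (f, cx, cy), the pure-translation motion model, and the single linearised solve are assumptions made for illustration.

import numpy as np

def patch_scene_flow(I0, I1, Z, ys, xs, f, cx, cy):
    """ys, xs: pixel coordinates of the patch; Z: depth at time t; f, cx, cy: intrinsics."""
    Iy, Ix = np.gradient(I0)   # image derivatives of the first frame
    It = I1 - I0               # temporal derivative
    A, b = [], []
    for y, x in zip(ys, xs):
        z = Z[y, x]
        # Linearised image flow of a point at depth z translating by (tx, ty, tz):
        #   u ≈ (f*tx - (x - cx)*tz) / z,   v ≈ (f*ty - (y - cy)*tz) / z
        Ju = np.array([f / z, 0.0, -(x - cx) / z])
        Jv = np.array([0.0, f / z, -(y - cy) / z])
        # Brightness constancy: Ix*u + Iy*v + It = 0
        A.append(Ix[y, x] * Ju + Iy[y, x] * Jv)
        b.append(-It[y, x])
    t, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return t   # (tx, ty, tz): 3D translation of the patch between the two frames

A full Lucas-Kanade framework would iterate this linearised step with re-warping; the single solve above only illustrates how depth turns the 2D photometric constraint into a constraint on 3D motion.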
“…Spies et al [29] estimate the scene flow by simultaneously constraining the flow in intensity and depth images of an orthographically captured surface. Lukins and Fisher [21] extend this approach to multiple color channels and an aligned depth image. Letouzey et al [18] directly estimate the 3D motion field using photometric constraints and a global regularization term.…”
Section: Related Work
confidence: 99%
“…Spies et al [11] solve for optical and range flows: In that work depth data is used as an extra channel and the classical optical flow equation is adapted to constrain the observed depth data. Lukins and Fisher [12] extend this approach to multiple color channels and one aligned depth image. In both approaches the 3D motion field is computed by constraining the flow in intensity and depth images of an orthographically captured surface, so that the range flow is not used to support the 2D motion estimation.…”
Section: Related Work
confidence: 99%
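For context on how the classical optical flow equation is adapted to depth, the range-flow constraint referred to in these statements follows from differentiating the observed depth surface Z(x, y, t) along the motion; this standard derivation is sketched below in LaTeX (it is not taken verbatim from the cited papers).

\frac{dZ}{dt} = Z_x \frac{dx}{dt} + Z_y \frac{dy}{dt} + Z_t = w
\quad\Longrightarrow\quad
Z_x u + Z_y v - w + Z_t = 0,

where (u, v) is the image-plane flow and w the velocity along the depth axis. Each intensity or colour channel I contributes the usual constraint I_x u + I_y v + I_t = 0, and the two families of equations are solved jointly for (u, v, w).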
“…However, the extension of these techniques into 3D (for example via scene and range-flow [16]) currently serves only to capture localised changes. In this respect, hybrid flow based techniques [13], as well as harmonic maps [17], can be employed to help constrain the newly emerging 4D capture technologies for space-time reconstruction [19]. This has led to the possibility of video-rate capture of range data for new commercial and academic systems [18].…”
Section: Capturing Deformation
confidence: 99%