2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv56688.2023.00206
M-FUSE: Multi-frame Fusion for Scene Flow Estimation

Cited by 12 publications (13 citation statements) · References 38 publications
“…Recently, deep learning has demonstrated powerful capabilities in end-to-end learning of scene flow estimation from stereo inputs [24,32,41]. Additionally, approaches that leverage pre-existing 3D structure through inputs of RGB-D sequences [31,39,45,33] or Lidar points [28,56,38,55,12,11,52] have also been proposed for various scenarios. Monocular scene flow.…”
Section: Related Work (mentioning)
confidence: 99%
“…To enable self-supervised training, the estimated depth D_1 of the first image and the SE3 motion field T_{1→2} are first converted into the scene flow representation (u, v, ΔD) with known camera intrinsics [33], where (u, v) denotes the standard optical flow F_{1→2}, and ΔD denotes the depth change registered to the first frame I_1. We denote D̃_1 = D_1 + ΔD, which represents the transformed depth map registered to the first frame.…”
Section: Self-supervised Loss (mentioning)
confidence: 99%
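The conversion quoted above is standard pinhole-camera geometry: back-project each pixel of the first frame using D_1 and the intrinsics, apply the per-pixel SE(3) motion, and re-project to obtain (u, v) and ΔD. Below is a minimal PyTorch sketch of that step; the function name, tensor layouts, and the per-pixel 4x4 transform format are illustrative assumptions, not the cited authors' implementation.

```python
import torch

def se3_to_scene_flow(depth1, T_1to2, K):
    """Convert depth + per-pixel SE(3) motion into (u, v, dD).

    depth1:  (H, W)       estimated depth of the first frame, D_1
    T_1to2:  (H, W, 4, 4) per-pixel SE(3) transforms (rigid motion field)
    K:       (3, 3)       camera intrinsics

    Shapes and names are illustrative assumptions.
    """
    H, W = depth1.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

    # Back-project every pixel of frame 1 into camera coordinates.
    v_grid, u_grid = torch.meshgrid(
        torch.arange(H, dtype=depth1.dtype),
        torch.arange(W, dtype=depth1.dtype),
        indexing="ij",
    )
    X = (u_grid - cx) / fx * depth1
    Y = (v_grid - cy) / fy * depth1
    pts1 = torch.stack([X, Y, depth1, torch.ones_like(depth1)], dim=-1)  # (H, W, 4)

    # Apply the per-pixel rigid motion: p2 = T_1to2 @ p1.
    pts2 = torch.einsum("hwij,hwj->hwi", T_1to2, pts1)
    X2, Y2, Z2 = pts2[..., 0], pts2[..., 1], pts2[..., 2]

    # Re-project to obtain the optical-flow component (u, v).
    flow_u = fx * X2 / Z2 + cx - u_grid
    flow_v = fy * Y2 / Z2 + cy - v_grid

    # Depth change registered to the first frame: the transformed depth
    # map is D_1 + dD = Z2.
    dD = Z2 - depth1
    return flow_u, flow_v, dD
```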
“…However, none of these studies exploit the useful temporal information from previous point cloud frames. Extensive studies on optical flow estimation [16], [20], [22], [42], [50], [52], [71], [86] and Figure 1(a) have shown that scene flows in consecutive frames are similar to each other (in Figure 1(a), the upper-left color wheel represents the flow magnitude and direction). To this end, an intuitive approach for exploiting temporal information, namely Joint, is to force a single FNSF to jointly estimate the previous flow (t-1 → t) and the current flow (t → t+1).…”
Section: Introduction (mentioning)
confidence: 99%
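The "Joint" baseline described in this excerpt, a single runtime-optimized model forced to estimate both the previous flow (t-1 → t) and the current flow (t → t+1), can be sketched generically as follows. The shared MLP, the frame-pair conditioning, and the Chamfer objective are stand-ins chosen for illustration; this is not the FNSF architecture or code from the cited work. Sharing one set of parameters across both frame pairs is what couples the two flow estimates.

```python
import torch
import torch.nn as nn

class SharedFlowMLP(nn.Module):
    """One coordinate MLP predicting a flow vector per 3D point,
    conditioned on which frame pair (previous or current) it serves."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pts, pair_id):
        # pair_id: 0 for (t-1 -> t), 1 for (t -> t+1).
        cond = torch.full_like(pts[:, :1], float(pair_id))
        return self.net(torch.cat([pts, cond], dim=1))

def chamfer(a, b):
    # Symmetric Chamfer distance between point sets (N, 3) and (M, 3).
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def joint_runtime_fit(pc_prev, pc_cur, pc_next, iters=500, lr=1e-3):
    """Jointly fit both flows with a single shared model at test time."""
    model = SharedFlowMLP()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        flow_prev = model(pc_prev, pair_id=0)  # flow t-1 -> t
        flow_cur = model(pc_cur, pair_id=1)    # flow t -> t+1
        loss = chamfer(pc_prev + flow_prev, pc_cur) + chamfer(pc_cur + flow_cur, pc_next)
        loss.backward()
        opt.step()
    return model(pc_cur, pair_id=1).detach()   # current-frame flow estimate
```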
“…lizing such valuable temporal information for improving two-frame point cloud scene flow estimation. Such a gap is particularly unexpected, because the extensive body of research in optical flow estimation [20], [22], [42], [50], [52], [71], [86] has shown the importance of temporal information from previous frames, even amidst rapid motion changes in optical flow. For instance, as illustrated in Figure 1(a), it is evident that flows between consecutive frames bear a significant resemblance to each other, underscoring the potential benefits of integrating temporal insights into scene flow estimation for two-frame point clouds.…”
Section: Introduction (mentioning)
confidence: 99%