2020
DOI: 10.36227/techrxiv.13186688
Preprint

CoT-AMFlow: Adaptive Modulation Network with Co-Teaching Strategy for Unsupervised Optical Flow Estimation

Hengli Wang,
Rui Fan,
Ming Liu

Abstract: The interpretation of ego motion and scene change is a fundamental task for mobile robots. Optical flow information can be employed to estimate motion in the surroundings. Recently, unsupervised optical flow estimation has become a research hotspot. However, unsupervised approaches are often unreliable in partially occluded or texture-less regions. To address this problem, we propose CoT-AMFlow, an unsupervised optical flow estimation approach. In terms of the network architecture, …

Cited by 6 publications (8 citation statements)
References 7 publications
“…In order to extrapolate the 3D information from a given driving scene, images from multiple views are required [27]. These images can be captured using either a single moving camera [28] or an array of synchronized cameras, as shown in Fig. 2.…”
Section: Autonomous Car Perception
confidence: 99%
“…2. The former is typically known as structure from motion (SfM) [29] or optical flow [28], while the latter is typically referred to as stereo vision or binocular vision (in case two cameras are used) [26]. SfM methods estimate both camera poses and the 3D points of interest from images captured from multiple views, which are linked by a collection of visual features.…”
Section: Autonomous Car Perception
confidence: 99%
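The SfM procedure cited above recovers camera poses and 3D points from feature correspondences across views. Its core geometric primitive, triangulating a 3D point from two calibrated views, can be sketched with the standard linear (DLT) method. This is an illustrative sketch with made-up names (`triangulate`, synthetic camera matrices), not code from any of the cited works:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel coordinates."""
    # Each view contributes two rows of the homogeneous system A @ X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean coordinates

# Two synthetic cameras: identity pose, and the same camera translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_hat = triangulate(P1, P2, x1, x2)
```

With noise-free correspondences, as here, the DLT solution recovers the point exactly; in a full SfM pipeline the same primitive is applied to many matched features, with robust estimation guarding against outlier matches.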
“…Autonomous driving appears prominently in our society in the form of the advanced driver assistance system (ADAS) in both commercial and research vehicles [1]. Visual environment perception, the front-end module and key component of the ADAS, analyzes the raw data collected by the car's sensors and outputs its understanding of the driving scenario [2]-[4]. Its outputs are then used by other modules, such as prediction and planning, to ensure the safe navigation of self-driving cars in complex environments [5], [6].…”
Section: Introduction
confidence: 99%
“…2, where two networks (LEAStereo [10] is used as the backbone network) with different initializations interactively teach each other about occlusions. Our previous work adopted this co-teaching framework for unsupervised optical flow estimation [17], and in this paper, we employ this framework for unsupervised stereo matching. This framework can significantly improve the model's robustness against outliers and further enhance the overall performance of unsupervised stereo matching.…”
Section: Introduction
confidence: 99%
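The co-teaching idea described in the statement above — two differently initialized models, each selecting low-loss (likely outlier-free) samples for its peer to train on — can be sketched in miniature. This is an illustrative sketch with toy linear models and made-up names (`co_teaching_step`, `keep_ratio`), not the CoT-AMFlow implementation; gross additive outliers stand in for occluded regions:

```python
import numpy as np

rng = np.random.default_rng(0)

def co_teaching_step(W_a, W_b, X, Y, lr=0.05, keep_ratio=0.8):
    """One co-teaching update for two linear models.
    Each model ranks samples by its own loss and hands the easiest
    (lowest-loss, presumably outlier-free) ones to its peer."""
    loss_a = ((X @ W_a - Y) ** 2).mean(axis=1)  # per-sample loss of model A
    loss_b = ((X @ W_b - Y) ** 2).mean(axis=1)  # per-sample loss of model B
    k = max(1, int(keep_ratio * len(X)))
    idx_for_b = np.argsort(loss_a)[:k]          # A selects samples for B
    idx_for_a = np.argsort(loss_b)[:k]          # B selects samples for A
    for W, idx in ((W_a, idx_for_a), (W_b, idx_for_b)):
        Xs, Ys = X[idx], Y[idx]
        # Gradient step on the peer-selected (outlier-filtered) subset.
        W -= lr * 2.0 * Xs.T @ (Xs @ W - Ys) / len(idx)
    return W_a, W_b

# Toy data: linear targets with a few gross outliers (stand-ins for occlusions).
X = rng.normal(size=(64, 3))
W_true = rng.normal(size=(3, 2))
Y = X @ W_true
Y[:6] += 10.0                             # corrupted samples
W_a = rng.normal(scale=0.1, size=(3, 2))  # different initializations
W_b = rng.normal(scale=0.1, size=(3, 2))
for _ in range(200):
    W_a, W_b = co_teaching_step(W_a, W_b, X, Y)
```

Because the corrupted samples keep a large residual throughout training, both models consistently exclude them from the subsets they select for each other, so each peer converges on the clean data — the robustness-to-outliers effect the statement describes.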