2017 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/ivs.2017.7995857

Mono-vision based moving object detection in complex traffic scenes

Abstract: Vision-based motion segmentation of dynamic objects can significantly help to understand the context around vehicles and thereby improve road traffic safety and autonomous navigation. Moving object detection in complex traffic scenes is therefore an unavoidable problem for ADAS and autonomous vehicles. In this paper, we propose an approach that combines several multiple-view geometry constraints to achieve moving object detection using only a monocular camera. Self-assigned weights are estimated online…

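As a rough illustration of the idea in the abstract (combining several multiple-view geometry constraints into a weighted, per-feature motion score), the sketch below gives one possible reading, not the authors' implementation; the residual choices, weights, and threshold are all assumptions introduced for illustration.

```python
# Minimal sketch (not the paper's code): score a tracked feature by a weighted
# combination of deviations from multiple-view geometry constraints.
# Assumes a fundamental matrix F and matched points are already available
# (e.g. from feature tracking plus RANSAC); all weights/thresholds are placeholders.
import numpy as np

def epipolar_residual(F, x1, x2):
    """Distance (pixels) of x2 to the epipolar line F @ [x1, 1]."""
    x1h = np.array([x1[0], x1[1], 1.0])
    x2h = np.array([x2[0], x2[1], 1.0])
    line = F @ x1h                                  # line in image 2: ax + by + c = 0
    return abs(x2h @ line) / np.hypot(line[0], line[1])

def motion_score(residuals, weights):
    """Weighted average of constraint deviations for one feature point."""
    r = np.asarray(residuals, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(w @ r) / float(w.sum())

# Illustrative usage with placeholder values.
F = np.eye(3)                                       # placeholder, not a real fundamental matrix
r_epi = epipolar_residual(F, (100.0, 50.0), (103.0, 52.0))
score = motion_score([r_epi, 1.0], [1.0, 0.5])      # second residual: a hypothetical depth check
is_moving = score > 1.5                             # threshold chosen arbitrarily for illustration
```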
Cited by 6 publications (7 citation statements) | References 23 publications

Citation statements (ordered by relevance):
“…It's hard to give an objective comparison against the state of the art, as we are proposing a method to work on fisheye cameras. No public automotive fisheye dataset exists with appropriate ground truth (although we acknowledge that the fisheye data augmentation on existing large-scale datasets [27] may alleviate the issue), and existing methods [10], [11], [25] are designed to work on standard field-of-view cameras. However, if we observe the published results of MODNet [11], we can see that it can sometimes suffer from similar false positives as our proposal, for example as shown in Figure 9.…”
Section: Results
“…In our case, the values of the weights were empirically set to (1.0, 1.0, 0.2, 0.2) in order to assign more importance to the epipolar and positive depth constraints, which are always true, as opposed to the positive height and anti-parallel ones that require stronger assumptions on the scene. An adaptive approach in the selection of the weights such as the one used by Frémont et al [25], where the skewness of the reconstruction error is used as an estimate of the noisiness of the distribution, could be taken into consideration. However, it has to be noted that the parameters of the constraint deviation distribution can change because of the scene content (e.g.…”
Section: F. Motion Likelihood Calculation
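The statement above attributes to Frémont et al. [25] an adaptive weighting scheme that uses the skewness of the reconstruction-error distribution as an estimate of noisiness. The sketch below is a minimal, hypothetical reading of that idea, not the cited authors' exact formulation; in particular, the mapping from skewness to weight is an assumption.

```python
# Minimal sketch of skewness-based adaptive weighting (a hypothetical reading
# of the scheme mentioned above, not the cited authors' exact formula).
import numpy as np

def sample_skewness(x):
    """Third standardized moment of a sample."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 3).mean() / (s ** 3 + 1e-12)

def adaptive_weight(deviations):
    """Down-weight a constraint whose deviation distribution looks noisier
    (here: more strongly skewed); the mapping 1/(1+|skew|) is an assumption."""
    return 1.0 / (1.0 + abs(sample_skewness(deviations)))

# One weight per constraint, re-estimated online from the deviations observed
# over all tracked features in the current frame (placeholder data below).
rng = np.random.default_rng(0)
epipolar_dev  = rng.random(500) * 2.0      # placeholder epipolar residuals
pos_depth_dev = rng.random(500) ** 3       # placeholder positive-depth violations
weights = [adaptive_weight(epipolar_dev), adaptive_weight(pos_depth_dev)]
```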
“…In [5], multi-view geometry and structure consistency constraints are combined to segment moving objects in the scene with a monocular camera. Such redundant constraints ensure high detection precision in degraded circumstances.…”
Section: A. Geometric Constraint Based Detection
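For context on the structure consistency constraint mentioned above, here is a minimal sketch of one plausible form of such a test; it assumes triangulated 3-D points and known ego-motion, and is not necessarily the exact constraint used in [5].

```python
# Minimal sketch of a structure-consistency test for one tracked point
# (one plausible form; not necessarily the exact constraint used in [5]).
import numpy as np

def structure_consistent(X_t, X_t1, R, t_vec, tol=0.15):
    """X_t, X_t1: the point triangulated in frames t and t+1 (camera coordinates).
    R, t_vec: ego-motion from frame t to t+1. A static point should satisfy
    X_t1 ~= R @ X_t + t_vec; a large relative discrepancy suggests independent motion."""
    predicted = R @ np.asarray(X_t, dtype=float) + np.asarray(t_vec, dtype=float)
    err = np.linalg.norm(np.asarray(X_t1, dtype=float) - predicted)
    return err / max(np.linalg.norm(predicted), 1e-6) < tol   # tolerance is a placeholder
```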
“…To this end, the factorization-based scene motion segmentation presented in Section 3 is employed. Alternatively, multiple-view motion detection can be performed [35]. Based on the rough scene segmentation, multi-target tracking (MTT) is started to manage dynamic regions.…”
Section: Track-before-detect Framework
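The factorization-based segmentation referred to above is, in the classic Tomasi-Kanade-style formulation, a low-rank subspace test on the trajectory matrix of tracked points. The sketch below illustrates that generic idea under stated assumptions; it is not the citing authors' exact algorithm.

```python
# Minimal sketch of the low-rank subspace test behind factorization-based
# motion segmentation (generic Tomasi-Kanade-style reasoning, not the citing
# authors' exact algorithm).
import numpy as np

def subspace_residuals(W, rank=4):
    """W: 2F x P trajectory matrix (x/y coordinates of P points over F frames).
    Project the trajectories onto the dominant rank-k subspace; trajectories of
    independently moving points tend to leave larger residuals."""
    Wc = W - W.mean(axis=1, keepdims=True)             # center each row
    U, S, Vt = np.linalg.svd(Wc, full_matrices=False)
    W_hat = U[:, :rank] @ np.diag(S[:rank]) @ Vt[:rank]
    return np.linalg.norm(Wc - W_hat, axis=0)          # one residual per point

# Points whose residual exceeds a (placeholder) threshold would seed the rough
# dynamic regions that the multi-target tracker then manages.
```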
“…The TbD-SfM is initialized with rough motion segments (see Section 3 or, alternatively, [35]). In this stage, feature points are assigned to the input dynamic regions.…”
Section: Track-before-detect Framework