2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA)
DOI: 10.1109/ispa.2019.8868508
Vehicle pose estimation via regression of semantic points of interest

Abstract: In this paper we address the problem of extracting vehicle 3D pose from 2D RGB images. An accurate methodology is presented that locates the 3D coordinates of 20 pre-defined semantic vehicle points of interest, or keypoints, from 2D information. The presented two-step pipeline provides a straightforward way of extracting three-dimensional information from planar images while avoiding the use of other sensors that would lead to a more expensive and harder-to-manage system. The main contribution of th…

Cited by 12 publications (8 citation statements)
References 30 publications
“…Pipelines using deep learning have seen great successes in areas such as human pose estimation [29,50,52,75,76], and pose estimation of household objects [23,48,56]. With the growing interest in self-driving vehicles, research has also focused on jointly estimating vehicle shape and pose [12,33,37,44,69]. Many open-source driving datasets have also been released for benchmarking [11,68,80].…”
Section: Related Workmentioning
confidence: 99%
“…Despite its popularity (e.g., the model is also used in human shape estimation and face detection [89]), pose estimation with active shape models leads to a non-convex optimization problem and local solvers get stuck in poor solutions, and are sensitive to outliers [84,89]. More recently, research effort has been devoted to end-to-end learning-based 3D pose estimation with encouraging results in human pose estimation [35] and vehicle pose estimation [12,33,37,44,69]; these approaches still require a large amount of 3D labeled data, which is hard to obtain in the wild.…”
Section: Introductionmentioning
confidence: 99%
“…Object pose estimation from 3D point clouds is an important problem in robot perception, with applications including industrial robotics [3]- [6], self-driving cars [7]- [13], and domestic robotics [1], [2], [14]- [17]. Availability of pose-annotated datasets has fueled recent progress towards solving this problem [1]- [3], [7]- [9].…”
Section: Introductionmentioning
confidence: 99%