2016
DOI: 10.1109/access.2016.2629987
Heterogeneous Multi-View Information Fusion: Review of 3-D Reconstruction Methods and a New Registration with Uncertainty Modeling

Cited by 25 publications (19 citation statements) · References 65 publications
“…Besides positioning, a seamless handover mechanism is proposed by fusing GPS information of mobile terminals with the signals received from those terminals [26]. In addition to the fusion of RF and visual modalities, methods for fusing multiple streams of visual data are extensively studied in the literature, whether to reconstruct three-dimensional objects and scenes [27] or to compensate for a single camera's limited FoV [21], [28] or occlusions [29]. However, these studies do not consider privacy when collecting highly sensitive information, e.g., the trajectories of humans viewed in visual data or of mobile users tracked by GPS.…”
Section: B. Related Work and Organization
confidence: 99%
“…where the coefficients s_c and s_i are the scale coefficients for the camera error term and the IMU error term, respectively. The camera feature re-projection error E_C(t, q) between camera frame t and frame q is given by Eq. (15), where f_{t,j} and f_{q,j} are the j-th feature point locations in image t and image q, respectively; π(·) denotes the feature reprojection into the image domain; and q_{t,j} is the information matrix corresponding to the j-th feature correspondence. For further details on computing the re-projection errors and the information matrix, please refer to [19].…”
Section: A. Vehicle Dynamic Analysis
confidence: 99%
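The excerpt above describes a Mahalanobis-weighted reprojection error summed over feature correspondences. A minimal sketch of such a term, assuming a pinhole projection for π(·) and purely illustrative names (`project`, `reprojection_error`, and the variable layout are assumptions, not the cited paper's implementation):

```python
import numpy as np

def project(K, point_3d):
    """Pinhole projection pi(.): 3-D camera-frame point -> 2-D pixel."""
    p = K @ point_3d
    return p[:2] / p[2]

def reprojection_error(K, points_3d, feats_q, info):
    """Sum of information-matrix-weighted reprojection residuals over
    feature correspondences j (illustrative formulation).

    points_3d : estimated 3-D points expressed in the camera frame
    feats_q   : observed 2-D feature locations in the image
    info      : per-feature 2x2 information matrices (inverse covariances)
    """
    total = 0.0
    for X, fq, W in zip(points_3d, feats_q, info):
        # residual between the observed feature and the reprojected point
        r = fq - project(K, X)
        total += r @ W @ r  # Mahalanobis-weighted squared residual
    return total
```

With a perfect observation the residual, and hence the error term, is exactly zero; the information matrix simply down-weights noisy correspondences.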
“…Complementary to vehicle cameras, inertial measurement units (IMUs) can also provide meaningful information for assistive driving. As a proprioceptive sensor, an IMU is heterogeneous and complementary to cameras, which, by contrast, belong to the exteroceptive category [15]. In general, an IMU consists of tri-axis accelerometers, tri-axis gyroscopes, and tri-axis magnetometers.…”
Section: Introduction
confidence: 99%
“…1. To obtain the image plane of the virtual camera, the homography-based approach described in [5] is used, which fuses inertial data from the IS with the image plane of the real camera to produce the corresponding virtual camera's image plane.…”
Section: A. 3D Data Registration
confidence: 99%
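A virtual camera sharing the real camera's center but with an inertially measured rotation R can be related to the real image plane by the standard rotation-induced homography H = K R K⁻¹. The sketch below illustrates that construction only; it is not the specific method of [5], and all function names are hypothetical:

```python
import numpy as np

def rotation_homography(K, R):
    """Homography mapping pixels of the real camera to a virtual camera
    with the same center, rotated by R (e.g., from inertial data)."""
    return K @ R @ np.linalg.inv(K)

def warp_point(H, uv):
    """Apply homography H to a pixel (u, v) in homogeneous coordinates."""
    p = H @ np.array([uv[0], uv[1], 1.0])
    return p[:2] / p[2]
```

With R equal to the identity, the homography reduces to the identity map, so the virtual and real image planes coincide, which is a useful sanity check.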