2018 IEEE/ION Position, Location and Navigation Symposium (PLANS)
DOI: 10.1109/plans.2018.8373507

First-person indoor navigation via vision-inertial data fusion

Cited by 4 publications (5 citation statements)
References 24 publications
“…A comparative study of IMU- and camera-based localization can be seen in [102]. The most recent work on hybrid localization is discussed in [103,104,105,106,107,108,109]. Among these studies, hybrid indoor localization can be used for better performance, as it reduces IMU sensor and camera position errors.…”
Section: Related Work
confidence: 99%
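The hybrid localization pattern this excerpt describes, high-rate IMU dead reckoning corrected by lower-rate camera position fixes, is commonly realized with a Kalman-style filter. The sketch below is a minimal, generic illustration of that pattern along one axis, not the method of any cited paper; the noise values, sample period, and names (HybridLocalizer, predict_imu, update_camera) are assumptions for illustration.

```python
# Minimal sketch of hybrid IMU/camera indoor localization along one axis.
# Hypothetical noise values and rates; illustrative, not a cited paper's method.
import numpy as np

class HybridLocalizer:
    """Kalman-style fusion: IMU acceleration predicts, camera fixes correct."""

    def __init__(self, dt=0.01):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
        self.B = np.array([[0.5 * dt**2], [dt]])     # acceleration input
        self.H = np.array([[1.0, 0.0]])              # camera observes position
        self.Q = 1e-4 * np.eye(2)                    # assumed process noise
        self.R = np.array([[0.05]])                  # assumed camera noise
        self.x = np.zeros((2, 1))                    # state: [position, velocity]
        self.P = np.eye(2)                           # state covariance

    def predict_imu(self, accel):
        """High-rate dead reckoning from one IMU acceleration sample."""
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update_camera(self, pos_meas):
        """Low-rate camera position fix pulls the drifting estimate back."""
        y = np.array([[pos_meas]]) - self.H @ self.x          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)              # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
```

In this arrangement the IMU runs between camera frames and each camera fix corrects the accumulated drift, which is how the hybrid schemes cited above reduce both IMU and camera position errors.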
“…This sampling rate is considered fast enough for human visual inspection. However, for applications such as automatic activity recognition that may require a higher sampling rate, the original angle can be synchronized with the video starting from sample C (17).…”
Section: Synchronization of Video and IMU Data
confidence: 99%
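One common way to synchronize a lower-rate video stream with higher-rate IMU data, in the spirit of the excerpt above, is to interpolate the IMU-derived angle onto the video frame timestamps after aligning a starting sample. This is a generic sketch only; the rates, the placeholder angle trace, and the start_sample variable (standing in for the excerpt's "sample C") are all hypothetical.

```python
# Sketch: align an IMU-derived angle sequence with video frames by
# interpolating onto frame timestamps (hypothetical rates and signal).
import numpy as np

imu_rate, video_rate = 100.0, 30.0              # assumed sampling rates (Hz)
imu_t = np.arange(0.0, 10.0, 1.0 / imu_rate)    # IMU timestamps (s)
angle = np.cumsum(np.random.randn(imu_t.size)) * 0.01  # placeholder angle trace

start_sample = 0    # placeholder index of the alignment sample ("sample C" above)
video_t = imu_t[start_sample] + np.arange(0.0, 9.0, 1.0 / video_rate)

# Resample the angle at each video frame time (linear interpolation).
angle_at_frames = np.interp(video_t, imu_t, angle)
```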
“…The accuracy of activity recognition when camera and IMU were fused was about 10% higher than that of using camera or IMU alone. Farnoosh et al [17] fused inertial data and video recorded from a smartphone for indoor navigation. The inertial sensor was used to estimate the smartphone orientation, and the navigation accuracy was improved compared to navigation without orientation estimation.…”
confidence: 99%
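Smartphone orientation estimation from inertial data, as summarized above, is often done with a complementary filter that blends integrated gyroscope rate with the accelerometer's gravity reference. The sketch below is one minimal, generic version of that idea, not Farnoosh et al.'s estimator; the blend factor and sample period are assumptions.

```python
# Sketch: complementary filter for pitch estimation from gyro + accelerometer.
# Generic illustration, not the cited paper's orientation estimator.
import math

ALPHA = 0.98   # assumed blend factor: trust gyro short-term, accel long-term
DT = 0.01      # assumed IMU sample period (s)

def update_pitch(pitch, gyro_y, ax, az):
    """One filter step: integrate gyro rate, correct with accel gravity tilt."""
    gyro_pitch = pitch + gyro_y * DT      # short-term gyro integration (rad)
    accel_pitch = math.atan2(ax, az)      # tilt from the gravity direction (rad)
    return ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch
```

The gyro term tracks fast motion but drifts; the accelerometer term is noisy during motion but drift-free, so blending the two yields the kind of orientation estimate that improved navigation accuracy in the work cited above.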
“…A more general insight into this process of extracting the information available in the relationships between multiple data streams could short-circuit this trial-and-error process and underpin progress across numerous domains, and is therefore urgently needed. While the commonality of the underlying mathematics has been widely discussed (e.g., [2]), the authors propose that forward progress requires establishing a common scientific framework for multimodal data fusion (MMDF) to enable apples-to-apples comparisons of solutions developed for applications as diverse as first-person indoor navigation [3], data-driven decision support systems for assisting medical diagnosis [4], real-time water quality anomaly detection [5], and production line optimization [6]. Such a framework would enable the research community to integrate lessons learned from MMDF efforts across domains, domains which, despite their varying deliverables, use data with similar characteristics (varying dimensionalities, resolutions, noise/uncertainty patterns, gaps, etc.).…”
Section: Introduction
confidence: 99%