2020
DOI: 10.5194/isprs-archives-xliii-b4-2020-391-2020
A 3d Map Aided Deep Learning Based Indoor Localization System for Smart Devices

Abstract: Indoor positioning technologies represent a fast-developing field of research due to the rapidly increasing need for indoor location-based services (ILBS), in particular for applications using personal smart devices. Recently, progress in indoor mapping, including 3D modeling and semantic labeling, has started to offer benefits to indoor positioning algorithms, mainly in terms of accuracy. This work presents a method for efficient and robust indoor localization that supports applications in large…

Cited by 2 publications
(3 citation statements)
References 20 publications
“…The data gathering and map creation methods are similar to the previous work (Yang et al., 2020), in which the 3D map is built by RGBD SLAM (RTAB-Map) with a Kinect V1 RGBD camera and a LooMo robot, and the Wi-Fi fingerprints in the radio map are collected by a laptop at a 3 m interval between every two calibration points, as a balance between labor cost and accuracy. In this work, we implement experiments in another office hallway (55 m × 3 m) at the Ohio State University; see Figure 3 and Table 1.…”
Section: Methods
confidence: 99%
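The radio-map fingerprinting described in the statement above (RSS vectors stored at calibration points every 3 m, then matched against a live scan) is commonly realized with a k-nearest-neighbor lookup in signal space. The following is a minimal illustrative sketch, not the paper's actual deep-learning method: the calibration coordinates, access-point count, and RSS values are all invented for the example.

```python
import math

# Hypothetical radio map: calibration points spaced 3 m along a hallway's
# middle line, each storing mean RSS (dBm) from three example access points.
radio_map = {
    (0.0, 1.5): [-40, -70, -80],
    (3.0, 1.5): [-50, -60, -75],
    (6.0, 1.5): [-60, -55, -65],
    (9.0, 1.5): [-70, -45, -60],
}

def knn_locate(query_rss, radio_map, k=2):
    """Estimate position as the centroid of the k calibration points
    whose stored RSS vectors are closest (Euclidean) to the query scan."""
    ranked = sorted(
        radio_map.items(),
        key=lambda item: math.dist(item[1], query_rss),
    )[:k]
    xs = [point[0] for point, _ in ranked]
    ys = [point[1] for point, _ in ranked]
    return (sum(xs) / k, sum(ys) / k)

# A query scan measured somewhere between the 3 m and 6 m calibration points.
est = knn_locate([-52, -58, -72], radio_map)
```

With the values above, the two nearest fingerprints are the 3 m and 6 m points, so the estimate falls at their midpoint, (4.5, 1.5). Denser calibration spacing lowers this interpolation error, which is the labor-versus-accuracy trade-off the quoted statement mentions.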
“…Test datasets are built on 18 points, where 3 laptops (VAIO Z Canvas, ThinkPad X1, HP ProBook) and 2 cameras (Kinect V1, SONY XPERIA X smartphone) are used for collecting Wi-Fi RSS and query images, respectively. The configuration of test points follows our previous work (Figure 6) (Yang et al., 2020): black points are randomly set in the hallway, and green points are set between calibration points, aligned with the middle line of the hallway over which the mapping robot LooMo passes. Therefore, the query images recorded by the Kinect have a 6DoF camera pose obtained from RGBD SLAM as ground truth, while the query images recorded by the smartphone have only a 2D location ground truth.…”
Section: Methods
confidence: 99%