2020
DOI: 10.1007/978-3-030-58796-3_13

Can We Unify Perception and Localization in Assisted Navigation? An Indoor Semantic Visual Positioning System for Visually Impaired People

Abstract: Navigation assistance has made significant progress in recent years with the emergence of different approaches that allow visually impaired people to perceive their surroundings and localize themselves accurately, greatly improving their mobility. However, most existing systems address each of these tasks individually, which increases the response time; this is clearly not beneficial for a safety-critical application. In this paper, we aim to cover scene perception and visual localization needed…

Cited by 9 publications (4 citation statements); references 18 publications.
“…Our work is concerned with a lightweight network for processing scenes captured by sensors, for more efficient computing and processing. The model is deployed on a wearable system composed of a RealSense L515 and an NVIDIA Jetson AGX Xavier processor, so that the wearable system is able to accompany visually impaired people and assist them with acoustic feedback, e.g., by combining the system with bone-conduction earphones on the wearable glasses [10]. A comprehensive set of experiments on both public datasets and real-world scenes captured by our wearable system demonstrates the effectiveness of the presented universal perception framework.…”
Section: Introduction
confidence: 90%
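The deployment this statement describes can be pictured with a short sketch: L515 frames are captured and passed through a lightweight segmentation network whose output would then drive acoustic feedback. This is a minimal illustration under stated assumptions, not the cited authors' code; the checkpoint name and stream settings are placeholders, while the pyrealsense2 and PyTorch calls are the standard APIs.

```python
# Minimal sketch (not the cited system) of a capture-and-infer loop:
# RealSense L515 color frames -> lightweight segmentation -> feedback hook.
import numpy as np
import pyrealsense2 as rs
import torch

# Hypothetical lightweight segmentation network, exported as TorchScript.
model = torch.jit.load("lightweight_seg.pt").eval().to("cuda")

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        color = np.asanyarray(frames.get_color_frame().get_data())
        # HWC uint8 image -> NCHW float tensor in [0, 1]
        x = torch.from_numpy(color).permute(2, 0, 1).float().div(255)
        with torch.no_grad():
            seg = model(x.unsqueeze(0).to("cuda")).argmax(1)  # per-pixel class ids
        # A real system would map `seg` to acoustic feedback here,
        # e.g., announcing obstacles over bone-conduction earphones.
finally:
    pipeline.stop()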
“…As a multi-sensor system, the SurImage stereo camera complements the RealSense under various conditions. The wearable system [10] with a pair of smart glasses informs visually impaired people based on semantic segmentation and 3D reconstruction. Differently, our work focuses on the multi-task model while satisfying the real-time requirement simultaneously.…”
Section: Related Work (Visual Assistive System)
confidence: 99%
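The multi-task, real-time point raised in this statement usually comes down to one design choice: a shared backbone feeding several lightweight task heads, so all outputs come from a single forward pass. The sketch below illustrates that pattern only; the layer sizes, class count, and head structure are assumptions for illustration, not the cited paper's architecture.

```python
# Sketch of a shared-backbone multi-task network: one encoder pass,
# two cheap heads (semantic segmentation + dense depth).
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, num_classes: int = 19):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)  # semantic logits
        self.depth_head = nn.Conv2d(64, 1, 1)          # dense depth

    def forward(self, x):
        feat = self.backbone(x)  # shared computation, run once per frame
        return self.seg_head(feat), self.depth_head(feat)

seg_logits, depth = MultiTaskNet()(torch.randn(1, 3, 480, 640))
print(seg_logits.shape, depth.shape)  # both at 1/4 input resolution
```

Sharing the backbone is what makes the real-time requirement tractable: the expensive encoder runs once per frame regardless of how many task heads are attached.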
“…In [6], a kinetic real-time CNN was customized for the recognition of road barriers, which are usually set up at the gate of a residential or working area, to support navigation assistance. The wearable system [7] with a pair of smart glasses informs visually impaired people based on semantic segmentation and 3D reconstruction. In [8], a wearable belt is developed with a deep learning system to pinpoint the exact locations of surrounding objects and the scene type in real time.…”
Section: Related Work
confidence: 99%
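Pinpointing "the exact locations of surrounding objects", as in [8], typically reduces to deprojecting a detected pixel through the depth camera's intrinsics into a 3D point in the camera frame. Below is a hedged sketch using pyrealsense2's real deprojection helper; the detection pixel is a placeholder, and this is not the cited system's code.

```python
# Sketch: convert a detected object's pixel + depth into a 3D location.
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()  # default depth stream
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
intrinsics = depth_frame.profile.as_video_stream_profile().intrinsics

u, v = 320, 240                         # placeholder: a detection's center pixel
dist = depth_frame.get_distance(u, v)   # metres along the ray at (u, v)
x, y, z = rs.rs2_deproject_pixel_to_point(intrinsics, [u, v], dist)
print(f"object at ({x:.2f}, {y:.2f}, {z:.2f}) m in the camera frame")
pipeline.stop()
```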
“…The prominence of mobile computing has been increasing with the emergence of devices like wearable smart glasses, small camera-based sensors, and compact computers. For example, many applications can benefit from semantic information, such as robots working in a factory, extended-reality applications, drones [45], [46], [51], [53], and wearable systems exploiting scene understanding [1], [5], [7], [17], [30], [31], [49], [50]. These systems process images or videos in real time to estimate pose and guide movement.…”
Section: Introduction
confidence: 99%