2022
DOI: 10.3390/s22145241
Self-Supervised Sidewalk Perception Using Fast Video Semantic Segmentation for Robotic Wheelchairs in Smart Mobility

Abstract: The real-time segmentation of sidewalk environments is critical to achieving autonomous navigation for robotic wheelchairs in urban territories. A robust and real-time video semantic segmentation offers an apt solution for advanced visual perception in such complex domains. The key to this proposition is to have a method with lightweight flow estimations and reliable feature extractions. We address this by selecting an approach based on recent trends in video segmentation. Although these approaches demonstrate…
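The abstract's core idea, reusing expensive per-frame computation by propagating results along a lightweight optical-flow estimate, can be sketched as follows. This is an illustrative assumption about the general flow-propagation technique, not the paper's actual method; the function name and the nearest-neighbour warping scheme are hypothetical.

```python
import numpy as np

def warp_labels(prev_labels, flow):
    """Warp a per-pixel label map from a keyframe to the current frame
    using a dense optical-flow field. Nearest-neighbour sampling keeps
    class IDs discrete (no blending between classes).

    prev_labels: (H, W) int array of class IDs for the keyframe.
    flow: (H, W, 2) float array; flow[y, x] = (dx, dy) motion of the
          keyframe pixel into the current frame.
    """
    h, w = prev_labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # For each current-frame pixel, look back along the flow to find
    # its source pixel in the keyframe, clamped to the image bounds.
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    return prev_labels[src_y, src_x]

# Toy example: a 4x4 map with a "sidewalk" column (class 1) that
# moves one pixel to the right between frames.
labels = np.zeros((4, 4), dtype=int)
labels[:, 1] = 1
flow = np.zeros((4, 4, 2), dtype=float)
flow[..., 0] = 1.0  # uniform rightward motion of one pixel
warped = warp_labels(labels, flow)
print(warped[:, 2])  # the sidewalk column now sits at x = 2
```

In a full pipeline, the segmentation network would run only on keyframes, with cheap warps like this filling the intermediate frames, which is where the real-time speedup comes from.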

Cited by 4 publications (3 citation statements)
References: 67 publications
“…Enhancing Autonomous Navigation with Diverse Sidewalk Segmentation: Autonomous navigation systems have traditionally relied on generic models for sidewalk detection [ 50 ], which may not accurately reflect the diverse characteristics of different urban settings. By integrating customized pathway segmentation with the multi-resolution and diverse sidewalk and pedestrian route detection capabilities of the DELTA dataset, these systems can achieve a much deeper understanding of urban landscapes.…”
Section: Use Cases
confidence: 99%
“…Machine learning and artificial intelligence techniques have enabled adaptive behavior and improved user-machine interaction (Tomari et al., 2012). Sensor fusion and perception algorithms have enhanced the wheelchair's perception capabilities, enabling accurate obstacle detection and environment understanding (Pradeep et al., 2022).…”
Section: Previous Studies on Human-Robot Interaction
confidence: 99%
“…Traditional manual wheelchairs require constant physical exertion and control from the user, limiting their ability to traverse challenging terrains or navigate crowded spaces. In contrast, smart robotic wheelchairs employ a range of sensors, such as depth cameras, LiDAR (Light Detection and Ranging), and ultrasonic sensors, to perceive the surrounding environment and detect obstacles (Pradeep et al., 2022). This environment perception is further enhanced through the integration of machine learning algorithms and computer vision techniques, enabling the wheelchair to recognize and classify objects and obstacles in real-time.…”
Section: Introduction
confidence: 99%