2022
DOI: 10.1109/access.2022.3190014

Characterization of Semantic Segmentation Models on Mobile Platforms for Self-Navigation in Disaster-Struck Zones

Abstract: The role of unmanned vehicles in searching for and localizing victims in disaster-impacted areas such as earthquake-struck zones is becoming increasingly important. Self-navigation in an earthquake zone poses the unique challenge of detecting irregularly shaped obstacles such as road cracks, debris on the streets, and water puddles. In this paper, we characterize a number of state-of-the-art Fully Convolutional Network (FCN) models on mobile embedded platforms for self-navigation at these sites containing extremely irregu…
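In practice, characterizing FCN models on embedded hardware largely comes down to measuring per-model inference cost. A minimal sketch of that idea is shown below; it uses a stock torchvision FCN rather than the authors' actual models or platforms, and the model choice, input resolution, and iteration counts are assumptions for illustration only.

# Hypothetical sketch: timing the forward pass of a stock FCN segmentation
# model, roughly the kind of per-model characterization described in the abstract.
import time

import torch
from torchvision.models.segmentation import fcn_resnet50

model = fcn_resnet50(weights="DEFAULT").eval()
x = torch.randn(1, 3, 512, 512)  # assumed input resolution

with torch.no_grad():
    # Warm-up passes so one-time setup cost does not skew the timing.
    for _ in range(5):
        model(x)
    # Timed passes.
    start = time.perf_counter()
    for _ in range(20):
        out = model(x)["out"]  # per-pixel class scores
    elapsed = time.perf_counter() - start

print(f"mean latency: {elapsed / 20 * 1000:.1f} ms, output shape: {tuple(out.shape)}")

On an actual embedded board, a loop like this would presumably be repeated across several segmentation models and input resolutions to compare their latency and resource use.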

Cited by 2 publications (1 citation statement)
References: 45 publications
“…Jaderberg M. proposed enhancing the performance of the A3C algorithm in maze navigation by employing unsupervised auxiliary tasks; the proposed algorithm improved convergence speed, resilience, and success rate. Li presented a path-planning technique for mobile robots based on a DQN [21] and visual servoing. To accomplish indoor autonomous navigation, the initial environment and the target image were collected as inputs, and matching relations and control strategies were formed through training.…”
Citation type: mentioning (confidence: 99%)